Research Article
Navigating Robots in a Complex Environment with Moving Objects Using Artificial Intelligence

Year 2020, Volume: 4 Issue: 2, 225 - 236, 30.12.2020

Abstract

Robots are being used to automate a variety of tasks in different environments. Some of these applications require robots to navigate complex environments and avoid obstacles in order to reach their destinations. Given the dynamic nature of these environments, Artificial Intelligence (AI) is used to allow robots to handle continuously changing surroundings. Existing techniques require intensive processing power and energy sources, which limits their use in many applications. Thus, a new method is proposed in this study to take control of the robot when a collision is predicted. Different representations of the environment are used so that historical information can be provided efficiently. However, the results show that using the entire batch achieves better performance at similar complexity. The proposed method is able to reduce the number of collisions and increase the speed of the robot during navigation.
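The core idea described in the abstract — letting a learned policy drive the robot normally, with a separate module that takes over control when a collision is predicted — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the state fields, the distance-based predictor, and the fixed avoidance manoeuvre are all assumptions standing in for the learned components.

```python
from dataclasses import dataclass
import math


@dataclass
class RobotState:
    x: float
    y: float
    heading: float            # radians
    obstacle_distance: float  # metres to the nearest obstacle along the heading


def predict_collision(state: RobotState, speed: float, horizon: float = 1.0) -> bool:
    """Flag a collision if the robot would cover the remaining obstacle
    distance within the look-ahead horizon (a simple geometric stand-in
    for a learned collision predictor)."""
    return speed * horizon >= state.obstacle_distance


def policy_action(state: RobotState) -> tuple[float, float]:
    """Placeholder for the learned navigation policy: cruise straight ahead."""
    return 1.0, 0.0  # (speed m/s, steering rate rad/s)


def avoidance_action(state: RobotState) -> tuple[float, float]:
    """Safety controller invoked on a predicted collision: slow down and turn."""
    return 0.2, math.pi / 4


def control_step(state: RobotState) -> tuple[float, float]:
    """One control cycle: the policy proposes an action; if a collision is
    predicted under that action, the safety controller takes over."""
    speed, steer = policy_action(state)
    if predict_collision(state, speed):
        speed, steer = avoidance_action(state)
    return speed, steer
```

With an obstacle 0.5 m ahead, `control_step` switches to the slow, turning avoidance action; with the obstacle 5 m away, the policy's cruise action passes through unchanged.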

Supporting Institution

Altınbaş University

Project Number

58

Acknowledgements

thanks

References

  • Bottou, L. 2014. From machine learning to machine reasoning. Machine learning, 94(2), 133-149.
  • Choi, J., Park, K., Kim, M., & Seok, S. 2019. Deep Reinforcement Learning of Navigation in a Complex and Crowded Environment with a Limited Field of View. Paper presented at the 2019 International Conference on Robotics and Automation (ICRA).
  • Ha, D., & Schmidhuber, J. 2018. Recurrent world models facilitate policy evolution. Paper presented at Advances in Neural Information Processing Systems.
  • Kahn, G., Villaflor, A., Pong, V., Abbeel, P., & Levine, S. 2017. Uncertainty-aware reinforcement learning for collision avoidance. arXiv preprint arXiv:1702.01182.
  • Kim, K.-S., Kim, D.-E., & Lee, J.-M. 2018. Deep Learning Based on Smooth Driving for Autonomous Navigation. Paper presented at the 2018 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM).
  • Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., . . . Wierstra, D. 2015. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971.
  • Littman, M. L. 1994. Markov games as a framework for multi-agent reinforcement learning. In Machine Learning Proceedings 1994 (pp. 157-163). Elsevier.
  • Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., & Riedmiller, M. 2013. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602.
  • Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., . . . Ostrovski, G. 2015. Human-level control through deep reinforcement learning. Nature, 518(7540), 529.
  • Robert, C. 2014. Machine learning, a probabilistic perspective. Taylor & Francis.
  • Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., . . . Graepel, T. 2018. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science, 362(6419), 1140-1144.
  • Tan, M. 1993. Multi-agent reinforcement learning: Independent vs. cooperative agents. Paper presented at the Proceedings of the tenth international conference on machine learning.
  • Watkins, C. J., & Dayan, P. 1992. Q-learning. Machine learning, 8(3-4), 279-292.
There are 13 references in total.

Details

Primary Language English
Subjects Computer Software
Section Research Article
Authors

Omar Yaseen 0000-0003-3641-8655

Osman Nuri Uçan 0000-0002-4100-0045

Oğuz Bayat 0000-0001-5988-8882

Project Number 58
Publication Date 30 December 2020
Submission Date 6 July 2020
Acceptance Date 25 December 2020
Published in Issue Year 2020 Volume: 4 Issue: 2

Cite

APA Yaseen, O., Uçan, O. N., & Bayat, O. (2020). Navigating Robots in a Complex Environment with Moving Objects Using Artificial Intelligence. AURUM Journal of Engineering Systems and Architecture, 4(2), 225-236.