Show simple item record

dc.contributor.advisor: Jiménez Moreno, Robinson
dc.contributor.author: Pinzón Arenas, Javier Orlando
dc.contributor.other: Rubiano Fonseca, Astrid
dc.coverage.spatial: Calle 100
dc.date.accessioned: 2019-12-17T17:48:47Z
dc.date.accessioned: 2019-12-26T22:59:14Z
dc.date.available: 2019-12-17T17:48:47Z
dc.date.available: 2019-12-26T22:59:14Z
dc.date.issued: 2019-07-10
dc.identifier.uri: http://hdl.handle.net/10654/32762
dc.description.abstract: This work outlines the implementation of a control algorithm for an assistive robot focused on assisted feeding. The algorithm rests on three fundamental pillars: detecting whether or not food is present, making decisions when the user's hand obstructs the robot's trajectory, and executing the feeding task up to the point of contact. To this end, artificial intelligence techniques based on deep learning are applied, together with an RGB-D camera that captures information about the environment so that it can be processed to carry out the assistance. To detect the states of the mouth, that is, whether the user is chewing or waiting for food, a long short-term memory (LSTM) neural network was used, reaching 99.3% accuracy. Recognition of whether or not food is present on the plate was carried out with a convolutional neural network, which achieved a performance of 98.7%. For obstacle detection, the user's hand is defined as the obstacle; it is recognized and localized by a region-based convolutional neural network, achieving 77.4% average precision measured by the intersection over union between the predicted bounding boxes and the labelled ones. With these functionalities implemented, a user interface was created in which all the algorithms are coupled into a single system that carries out the task of assisting a user during feeding. The task is emulated through the interaction between the real environment, where the user is located, and a simulated environment, where the robot performs its movements, with the real three-dimensional information of each situation passed to the simulation. Functional tests of the system were then carried out, showing high performance of each function under three variations of the real environment, obtained mainly by altering the illumination from low-quality lighting to very bright lighting.
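The abstract above scores the hand detector by the intersection over union (IoU) between the predicted bounding boxes and the labelled ones. For readers unfamiliar with the metric, the following is a minimal illustrative sketch in Python; it is not taken from the thesis, and the [x, y, width, height] box format is an assumption:

    def iou(box_a, box_b):
        """Intersection over union of two boxes given as [x, y, width, height]."""
        # Convert to corner coordinates.
        ax1, ay1, ax2, ay2 = box_a[0], box_a[1], box_a[0] + box_a[2], box_a[1] + box_a[3]
        bx1, by1, bx2, by2 = box_b[0], box_b[1], box_b[0] + box_b[2], box_b[1] + box_b[3]
        # Width and height of the overlap rectangle (zero when the boxes are disjoint).
        inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
        inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
        inter = inter_w * inter_h
        union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
        return inter / union if union > 0 else 0.0

    # Half-overlapping 10x10 boxes: intersection 50, union 150, IoU = 1/3.
    print(iou([0, 0, 10, 10], [5, 0, 10, 10]))

A predicted box is typically counted as a correct detection when its IoU with the labelled box reaches a threshold such as 0.5; the 77.4% figure reported above is a precision of this kind.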
dc.description.tableofcontents:
LIST OF FIGURES
LIST OF TABLES
LIST OF ABBREVIATIONS
ABSTRACT
CHAPTER 1 INTRODUCTION
1.1. PROBLEM STATEMENT
1.2. JUSTIFICATION
1.3. OBJECTIVES
1.3.1. General objective
1.3.2. Specific objectives
1.4. PRESENTATION OF THE DOCUMENT
CHAPTER 2 BACKGROUND AND STATE OF THE ART
2.1. DEEP LEARNING
2.2. ASSISTIVE ROBOTICS
CHAPTER 3 THEORETICAL FRAMEWORK
3.1. MACHINE VISION
3.2. DEEP LEARNING
3.2.1. Convolutional neural networks
3.2.2. Region-based convolutional neural networks
3.2.3. Recurrent neural networks with long short-term memory
3.3. ASSISTIVE ROBOTICS
CHAPTER 4 MATERIALS AND METHODS
4.1. METHODOLOGICAL STEPS
4.2. TOOLS AND INSTRUMENTS
4.3. PARTICIPANTS
4.4. ETHICAL CONSIDERATIONS
CHAPTER 5 ANALYSIS AND RESULTS
5.1. FOOD DETECTION
5.1.1. Building the database
5.1.2. Implementation of the convolutional neural network
5.2. OBSTACLE IDENTIFICATION AND REACTION TO OBSTRUCTION SITUATIONS
5.2.1. Building the database
5.2.2. Implementation of the Faster R-CNN
5.2.3. Reaction to the presence of obstacles
5.3. IDENTIFICATION OF MOUTH STATES
5.3.1. Feature extraction
5.3.2. State recognition system
5.4. COUPLING OF THE ALGORITHMS
CHAPTER 6 CONCLUSIONS AND FUTURE WORK
BIBLIOGRAPHY
ANNEXES
dc.format: pdf
dc.format.mimetype: application/pdf
dc.language.iso: spa
dc.publisher: Universidad Militar Nueva Granada
dc.rights: All rights reserved - Universidad Militar Nueva Granada, 2019
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/2.5/co/
dc.title: Algoritmo de operación para robot asistencial autónomo enfocado a alimentación
dc.type: info:eu-repo/semantics/masterThesis
dc.rights.accessrights: info:eu-repo/semantics/openAccess
dc.subject.lemb: ALGORITMOS
dc.subject.lemb: ROBÓTICA
dc.publisher.department: Facultad de Ingeniería
dc.type.local: Master's thesis
dc.description.abstractenglish: The present work outlines the implementation of a control algorithm for an assistive robot focused on assisted feeding. The algorithm has three fundamental pillars for its operation: detecting whether or not food is present, making decisions when the user's hand obstructs the robot's trajectory, and executing the feeding task up to the point of contact. For this, artificial intelligence techniques based on deep learning are used, together with an RGB-D camera in charge of capturing information from the environment so that it can be processed to perform the assistance. For the detection of the states of the mouth, to know whether the user is chewing or waiting for food, a long short-term memory neural network was used, obtaining 99.3% accuracy in its validation tests. The recognition of whether or not food is present on the plate was performed with a convolutional neural network, which reached a performance of 98.7%. Regarding obstacle detection, the user's hand is defined as the obstacle, which is recognized and localized by means of a region-based convolutional neural network, achieving 77.4% mean precision in the intersection over union between the estimated bounding boxes and the original (labelled) ones. With the functionalities implemented, a graphical user interface is created in which all the algorithms are coupled within a single system to carry out the task of assisting a user in their feeding. The emulation of this task is done through the interaction between the real environment, where the user is, and a simulated environment, where the robot performs the movements, passing the real three-dimensional information of the situations presented to the simulated environment. With this, performance tests of the system are carried out, demonstrating high performance of each of the functions within three variations of the real environment, obtained by altering the lighting from low-quality lighting to bright lighting.
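The obstacle detector described in the abstracts is a region-based convolutional neural network (the table of contents names it as Faster R-CNN, following Ren et al.). As a hedged sketch only, since the record does not state which framework the thesis used: an equivalent detector can be instantiated with torchvision's pretrained Faster R-CNN, which would then need fine-tuning on labelled hand images to reproduce the hand detector described above.

    import torch
    from torchvision.models.detection import fasterrcnn_resnet50_fpn

    # Pretrained Faster R-CNN with COCO weights; a stand-in for the same
    # architecture, not the network trained in the thesis.
    model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    # One RGB frame as a 3xHxW float tensor scaled to [0, 1]; in the thesis
    # setup the RGB-D camera would supply this image plus a depth map.
    frame = torch.rand(3, 480, 640)

    with torch.no_grad():
        detections = model([frame])[0]  # boxes as [x1, y1, x2, y2] in pixels

    # Keep only confident detections.
    keep = detections["scores"] > 0.7
    print(detections["boxes"][keep], detections["labels"][keep])

The returned boxes can then be scored against the labelled ones with an IoU criterion such as the one sketched after the abstract above.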
dc.title.translated: Operation algorithm for autonomous feeding assistance robot
dc.subject.keywords: Feeding Assistance
dc.subject.keywords: Deep Learning
dc.subject.keywords: Convolutional Neural Network
dc.subject.keywords: Human-machine contact point
dc.subject.keywords: Neural network with memory
dc.publisher.program: Maestría en Ingeniería Mecatrónica
dc.creator.degreename: Magíster en Ingeniería Mecatrónica
dc.description.degreelevel: Master's
dc.publisher.faculty: Ingeniería - Maestría en Ingeniería Mecatrónica
dc.type.dcmi-type-vocabulary: Text
dc.type.version: info:eu-repo/semantics/acceptedVersion
dc.rights.creativecommons: Attribution-NonCommercial-NoDerivs (CC BY-NC-ND)
dc.relation.references: FELTAN, Corina Maria, et al. ROBOT AUTÓNOMO LIMPIADOR DE PISO. Salão do Conhecimento, 2017, vol. 3, no. 3.
dc.relation.references: DUBEY, Sanjay, et al. An FPGA based service Robot for floor cleaning with autonomous navigation. In Research Advances in Integrated Navigation Systems (RAINS), International Conference on. IEEE, 2016. p. 1-6.
dc.relation.references: VELANDIA, Natalie Segura; BELENO, Ruben D. Hernandez; MORENO, Robinson Jimenez. Applications of Deep Neural Networks. International Journal of Systems Signal Control and Engineering Applications, 2017, vol. 10, no. 1.
dc.relation.references: WHITLEY, Darrell. A genetic algorithm tutorial. Statistics and Computing, 1994, vol. 4, no. 2, p. 65-85.
dc.relation.references: KARABOGA, Dervis; AKAY, Bahriye. A comparative study of artificial bee colony algorithm. Applied Mathematics and Computation, 2009, vol. 214, no. 1, p. 108-132.
dc.relation.references: ABE, Shigeo. Support vector machines for pattern classification. London: Springer, 2005.
dc.relation.references: HOPFIELD, John J. Artificial neural networks. IEEE Circuits and Devices Magazine, 1988, vol. 4, no. 5, p. 3-10.
dc.relation.references: LECUN, Yann; BENGIO, Yoshua; HINTON, Geoffrey. Deep learning. Nature, 2015, vol. 521, no. 7553, p. 436-444.
dc.relation.references: HINTON, Geoffrey E.; OSINDERO, Simon; TEH, Yee-Whye. A fast learning algorithm for deep belief nets. Neural Computation, 2006, vol. 18, no. 7, p. 1527-1554.
dc.relation.references: PASCANU, Razvan, et al. How to construct deep recurrent neural networks. arXiv preprint arXiv:1312.6026, 2013.
dc.relation.references: ZEILER, Matthew D.; FERGUS, Rob. Visualizing and understanding convolutional networks. In European Conference on Computer Vision. Springer, Cham, 2014. p. 818-833.
dc.relation.references: LECUN, Yann, et al. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1989, vol. 1, no. 4, p. 541-551.
dc.relation.references: KRIZHEVSKY, Alex; SUTSKEVER, Ilya; HINTON, Geoffrey E. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems. 2012. p. 1097-1105.
dc.relation.references: KIM, Kyung-Min, et al. Pororobot: A deep learning robot that plays video Q&A games. In Proceedings of AAAI Fall Symposium on AI for HRI. 2015.
dc.relation.references: TAPUS, Adriana; MATARIC, Maja J.; SCASSELLATI, Brian. Socially assistive robotics [grand challenges of robotics]. IEEE Robotics & Automation Magazine, 2007, vol. 14, no. 1, p. 35-42.
dc.relation.references: FEIL-SEIFER, David; MATARIC, Maja J. Defining socially assistive robotics. In Rehabilitation Robotics, 2005. ICORR 2005. 9th International Conference on. IEEE, 2005. p. 465-468.
dc.relation.references: SONG, Won-Kyung; KIM, Jongbae. Novel assistive robot for self-feeding. In Robotic Systems - Applications, Control and Programming. InTech, 2012.
dc.relation.references: NAGI, Jawad, et al. Max-pooling convolutional neural networks for vision-based hand gesture recognition. In Signal and Image Processing Applications (ICSIPA), 2011 IEEE International Conference on. IEEE, 2011. p. 342-347.
dc.relation.references: BROEKENS, Joost, et al. Assistive social robots in elderly care: a review. Gerontechnology, 2009, vol. 8, no. 2, p. 94-103.
dc.relation.references: ORIENT-LÓPEZ, F., et al. Tratamiento neurorrehabilitador de la esclerosis lateral amiotrófica. Rev Neurol, 2006, vol. 43, no. 9, p. 549-55.
dc.relation.references: Richardson Products Incorporated. Meal Buddy Robotic Assistive Feeder, 2017. [Online]. Available at: http://www.richardsonproducts.com/mealbuddy.html
dc.relation.references: Ministerio de Salud y Protección Social. Sala situacional de las Personas con Discapacidad (PCD), 2017. Retrieved from: https://www.minsalud.gov.co/sites/rid/Lists/BibliotecaDigital/RIDE/DE/PES/presentacion-sala-situacional-discapacidad-2017.pdf
dc.relation.references: PARODI, José Francisco, et al. Factores de Riesgo Asociados al Estrés Del Cuidador Del Paciente Adulto Mayor. Rev Aso Colomb Gerontol Geriatr, 2011, vol. 25, no. 2, p. 1503-1514.
dc.relation.references: PAZ RODRÍGUEZ, F.; ANDRADE PALOS, P.; LLANOS DEL PILAR, A. M. Consecuencias emocionales del cuidado del paciente con esclerosis lateral amiotrófica. Rev Neurol, 2005, vol. 40, no. 8, p. 459-64.
dc.relation.references: ABDEL-ZAHER, Ahmed M.; ELDEIB, Ayman M. Breast cancer classification using deep belief networks. Expert Systems with Applications, 2016, vol. 46, p. 139-144.
dc.relation.references: ZHENG, Wei-Long, et al. EEG-based emotion classification using deep belief networks. In Multimedia and Expo (ICME), 2014 IEEE International Conference on. IEEE, 2014. p. 1-6.
dc.relation.references: OLATOMIWA, Lanre, et al. A support vector machine–firefly algorithm-based model for global solar radiation prediction. Solar Energy, 2015, vol. 115, p. 632-644.
dc.relation.references: HONG, Haoyuan, et al. Spatial prediction of landslide hazard at the Yihuang area (China) using two-class kernel logistic regression, alternating decision tree and support vector machines. Catena, 2015, vol. 133, p. 266-281.
dc.relation.references: RUBIANO, A., et al. Elbow flexion and extension identification using surface electromyography signals. In Signal Processing Conference (EUSIPCO), 2015 23rd European. IEEE, 2015. p. 644-648.
dc.relation.references: SERCU, Tom, et al. Very deep multilingual convolutional neural networks for LVCSR. In Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on. IEEE, 2016. p. 4955-4959.
dc.relation.references: ARENAS, Javier Orlando Pinzón; MURILLO, Paula Catalina Useche; MORENO, Robinson Jiménez. Convolutional neural network architecture for hand gesture recognition. In Electronics, Electrical Engineering and Computing (INTERCON), 2017 IEEE XXIV International Conference on. IEEE, 2017. p. 1-4.
dc.relation.references: BARROS, Pablo, et al. A multichannel convolutional neural network for hand posture recognition. In International Conference on Artificial Neural Networks. Springer, Cham, 2014. p. 403-410.
dc.relation.references: PARKHI, Omkar M., et al. Deep face recognition. In BMVC. 2015. p. 6.
dc.relation.references: TAJBAKHSH, Nima, et al. Convolutional neural networks for medical image analysis: Full training or fine tuning? IEEE Transactions on Medical Imaging, 2016, vol. 35, no. 5, p. 1299-1312.
dc.relation.references: GIRSHICK, Ross, et al. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2014. p. 580-587.
dc.relation.references: ARENAS, Javier O. Pinzón; MORENO, Robinson Jiménez; MURILLO, Paula C. Useche. Hand gesture recognition by means of region-based convolutional neural networks. Contemporary Engineering Sciences, 2017, vol. 10, no. 27, p. 1329-1342.
dc.relation.references: REN, Shaoqing, et al. Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, vol. 39, no. 6, p. 1137-1149.
dc.relation.references: HE, Kaiming, et al. Mask R-CNN. In Computer Vision (ICCV), 2017 IEEE International Conference on. IEEE, 2017. p. 2980-2988.
dc.relation.references: GUBBI, Jayavardhana, et al. Internet of Things (IoT): A vision, architectural elements, and future directions. Future Generation Computer Systems, 2013, vol. 29, no. 7, p. 1645-1660.
dc.relation.references: FERRÚS, Rafael Mateo; SOMONTE, Manuel Domínguez. Design in robotics based in the voice of the customer of household robots. Robotics and Autonomous Systems, 2016, vol. 79, p. 99-107.
dc.relation.references: SoftBank Robotics. Who is NAO?, 2017. [Online]. Available at: https://www.ald.softbankrobotics.com/en/robots/nao
dc.relation.references: MARTENS, Christian; PRENZEL, Oliver; GRÄSER, Axel. The rehabilitation robots FRIEND-I & II: Daily life independency through semi-autonomous task-execution. In Rehabilitation Robotics. InTech, 2007.
dc.relation.references: KITTMANN, Ralf, et al. Let me introduce myself: I am Care-O-bot 4, a gentleman robot. Mensch und Computer 2015 - Proceedings, 2015.
dc.relation.references: Eclipse Automation. Obi, 2017. [Online]. Available at: https://meetobi.com/
dc.relation.references: SECOM. My Spoon: Meal-assistance Robot, 2017. [Online]. Available at: https://www.secom.co.jp/english/myspoon/index.html
dc.relation.references: Camanio Care AB. Bestic: increase your mealtime independence, 2017. [Online]. Available at: http://www.camanio.com/us/products/bestic/
dc.relation.references: PARK, Daehyung, et al. A multimodal execution monitor with anomaly classification for robot-assisted feeding. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2017.
dc.relation.references: SNYDER, Wesley E.; QI, Hairong. Machine vision. Cambridge University Press, 2010.
dc.relation.references: PÉREZ, Luis, et al. Robot guidance using machine vision techniques in industrial environments: A comparative review. Sensors, 2016, vol. 16, no. 3, p. 335.
dc.relation.references: DAVIES, E. Roy. Machine vision: theory, algorithms, practicalities. Elsevier, 2004.
dc.relation.references: CUBERO, Sergio, et al. Automated systems based on machine vision for inspecting citrus fruits from the field to postharvest—a review. Food and Bioprocess Technology, 2016, vol. 9, no. 10, p. 1623-1639.
dc.relation.references: RAUTARAY, Siddharth S.; AGRAWAL, Anupam. Vision based hand gesture recognition for human computer interaction: a survey. Artificial Intelligence Review, 2015, vol. 43, no. 1, p. 1-54.
dc.relation.references: HE, Kaiming, et al. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision. 2015. p. 1026-1034.
dc.relation.references: CIRESAN, Dan C., et al. Flexible, high performance convolutional neural networks for image classification. In IJCAI Proceedings - International Joint Conference on Artificial Intelligence. 2011. p. 1237.
dc.relation.references: JIN, Jonghoon; DUNDAR, Aysegul; CULURCIELLO, Eugenio. Flattened convolutional neural networks for feedforward acceleration. arXiv preprint arXiv:1412.5474, 2014.
dc.relation.references: SRIVASTAVA, Nitish, et al. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 2014, vol. 15, no. 1, p. 1929-1958.
dc.relation.references: IOFFE, Sergey; SZEGEDY, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning. 2015. p. 448-456.
dc.relation.references: LECUN, Yann, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998, vol. 86, no. 11, p. 2278-2324.
dc.relation.references: RUDER, Sebastian. An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747, 2016.
dc.relation.references: SUTSKEVER, Ilya, et al. On the importance of initialization and momentum in deep learning. In International Conference on Machine Learning. 2013. p. 1139-1147.
dc.relation.references: BADRINARAYANAN, Vijay; KENDALL, Alex; CIPOLLA, Roberto. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, vol. 39, no. 12, p. 2481-2495.
dc.relation.references: NURHADIYATNA, Adi; LONČARIĆ, Sven. Semantic image segmentation for pedestrian detection. In Image and Signal Processing and Analysis (ISPA), 2017 10th International Symposium on. IEEE, 2017. p. 153-158.
dc.relation.references: RONNEBERGER, Olaf; FISCHER, Philipp; BROX, Thomas. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2015. p. 234-241.
dc.relation.references: WOLF, Fabian. Densely connected convolutional networks for word spotting in handwritten documents. Master's thesis, TU Dortmund University, 2018.
dc.relation.references: ZITNICK, C. Lawrence; DOLLÁR, Piotr. Edge boxes: Locating object proposals from edges. In European Conference on Computer Vision. Springer, Cham, 2014. p. 391-405.
dc.relation.references: MEDSKER, Larry; JAIN, Lakhmi C. Recurrent neural networks: design and applications. CRC Press, 1999.
dc.relation.references: BENGIO, Yoshua, et al. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 1994, vol. 5, no. 2, p. 157-166.
dc.relation.references: HOCHREITER, Sepp; SCHMIDHUBER, Jürgen. Long short-term memory. Neural Computation, 1997, vol. 9, no. 8, p. 1735-1780.
dc.relation.references: FLACH, Peter; KULL, Meelis. Precision-recall-gain curves: PR analysis done right. In Advances in Neural Information Processing Systems. 2015. p. 838-846.
dc.relation.references: VIOLA, Paul; JONES, Michael J. Robust real-time face detection. International Journal of Computer Vision, 2004, vol. 57, no. 2, p. 137-154.
dc.relation.references: BRADLEY, Derek; ROTH, Gerhard. Adaptive thresholding using the integral image. Journal of Graphics Tools, 2007, vol. 12, no. 2, p. 13-21.
dc.relation.references: SONKA, Milan; HLAVAC, Vaclav; BOYLE, Roger. Image processing, analysis, and machine vision. Cengage Learning, 2014.
dc.relation.references: BALTRUSAITIS, Tadas, et al. OpenFace 2.0: Facial behavior analysis toolkit. In 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018). IEEE, 2018. p. 59-66.
dc.relation.references: ZADEH, Amir, et al. Convolutional experts constrained local model for 3D facial landmark detection. In Proceedings of the IEEE International Conference on Computer Vision. 2017. p. 2519-2528.
dc.relation.references: PITONAKOVA, Lenka, et al. Feature and performance comparison of the V-REP, Gazebo and ARGoS robot simulators. In Annual Conference Towards Autonomous Robotic Systems. Springer, Cham, 2018. p. 357-368.
dc.relation.references: SPONG, Mark W.; VIDYASAGAR, Mathukumalli. Robot dynamics and control. John Wiley & Sons, 2008.
dc.relation.references: PALOMARES, Fernando Giménez; SERRÁ, Juan Antonio Monsoriu; MARTÍNEZ, Elena Alemany. Aplicación de la convolución de matrices al filtrado de imágenes. Modelling in Science Education and Learning, 2016, vol. 9, no. 1, p. 97-108.
dc.subject.proposal: Assisted Feeding
dc.subject.proposal: Deep Learning
dc.subject.proposal: Convolutional Neural Network
dc.subject.proposal: Human-machine contact point
dc.subject.proposal: Neural network with memory

