Search results
4 records were found.
This paper describes the latest innovations in agricultural robotics, specifically for weed control, harvesting, and monitoring, taking into account the challenges of introducing robotics in this sector, such as fruit detection, orchard navigation, task-planning algorithms, and sensor optimization. One trend in precision agriculture is the introduction of swarm robotics, allowing collaboration between robots. Another is aerial imagery acquisition for ground analysis and environmental reconstruction, complemented by field-mounted sensors. Although robots are becoming quite important in the evolution of agriculture, it is still unlikely that all tasks will be automated in the near future, owing to the complexity arising from the overall variability of crops. The analysis of the current state of the art supports the proposal of a robotic rover for multipurpose agricultural activities (R2A2), developed to perform precise, controlled spraying, to pick up fallen fruit, and to predict fruit production in peach orchards. These tasks are performed in different periods of the campaign, allowing the same robotic platform to be used for different activities. The tasks performed by the robotic platform aim to help increase productivity through accurate fruit counting, which supports decision making concerning water requirements and the reduction of herbicide and pesticide applications. The design and construction of this platform aim to be an additional contribution to the rise of agricultural robotics.
The introduction of robotics into activities such as fruit harvesting, weed control, monitoring, spraying, soil handling, and autonomous navigation contributes technologically to the sector's efficacy and efficiency. This paper presents the design of a robotic gripper for picking up fallen fruit from the orchard ground at the end of the season, aiming to contribute to the sustainability of agricultural processes. The prototype is coupled to a robotic platform's Cartesian manipulator. The technical specifications of the gripper were set through a decision matrix based on a literature review. The gripper was modeled three-dimensionally (3D) using computer-aided design (CAD), leading to a 3D-printed model produced by fused deposition modeling (FDM) in polylactic acid (PLA). The control, regulation, and command of the gripper are accomplished by an Arduino microcontroller connected to end-switches that limit the work envelope, and to DC motors that drive the movements of the Cartesian manipulator arm and gripper. Experimental tests were carried out to evaluate the performance of the gripper in picking fruit, depending on the inclination of the robotic platform and the position of the fruit (central or lateral). The experimental results support the conclusion that the robotic gripper fulfills the objectives for which it was developed.
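The end-switch-guarded motion described in the abstract (DC motors driven until limit switches bound the work envelope) can be sketched as follows. This is a minimal illustration in Python rather than the authors' Arduino code, and the `motor` and `limit_switch` interfaces are hypothetical stand-ins for the real hardware I/O:

```python
# Sketch of one axis of the Cartesian manipulator: drive the motor
# until the end-switch trips or a step budget runs out, then stop.
# The motor/switch objects are assumed abstractions, not the paper's API.

def move_axis(motor, limit_switch, direction, max_steps=1000):
    """Drive one axis in `direction`; return the number of steps taken."""
    steps = 0
    while not limit_switch.pressed() and steps < max_steps:
        motor.step(direction)   # one motor increment toward the target
        steps += 1
    motor.stop()                # always stop at the envelope boundary
    return steps
```

The step budget mirrors a common safety pattern on microcontrollers: even if a switch fails, motion is bounded.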
Introducing robotics in agriculture can raise productivity and reduce costs and waste. A robot's capabilities can be brought to or above the human level, enabling it to work as a human does, but with higher precision and repeatability, and with little to no effort. This paper develops a detection algorithm for peach trunks in orchard rows, serving as an autonomous navigation and anti-collision auxiliary system for a terrestrial robotic rover for agricultural applications. The approach involves computer vision, more specifically the creation of an object detection model based on convolutional neural networks. The algorithm is implemented in TensorFlow on a Raspberry Pi 4. The model's core is the SSD MobileNet 640×640 detector with transfer learning from the COCO 2017 database. Eighty-nine pictures were captured for the model's database, of which 90% were used for training and the remaining 10% for testing. The model was converted for mobile applications with full integer quantization, from float32 to uint8, and compiled for Edge TPU support. The orientation strategy consists of two conditions: a double detection forms a linear function, represented by an imaginary line, which is updated every time two trunks are detected simultaneously. From the slope of this function and the horizontal deviation of a single detected bounding box from the created line, the algorithm orders the robot to adjust its orientation or keep moving forward. The evaluation of the model shows a precision and a recall of 94.4%. After quantization, these metrics are 92.3% and 66.7%, respectively. These simulation results show that, statistically, the model can perform the navigation task.
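The two-condition orientation strategy summarized above can be sketched as follows; all function names and thresholds are illustrative assumptions, not values from the paper. The row line is represented as x = m·y + b in image coordinates, fitted from two simultaneously detected trunk centers:

```python
# Sketch of the row-line orientation logic: two simultaneous trunk
# detections update the imaginary line; a single detection is compared
# against it. Thresholds (slope_tol, dev_tol) are assumed values.

def fit_row_line(c1, c2):
    """Fit x = m*y + b through two trunk bounding-box centers (x, y).
    Assumes the two centers have distinct image heights (y1 != y2)."""
    (x1, y1), (x2, y2) = c1, c2
    m = (x2 - x1) / (y2 - y1)
    b = x1 - m * y1
    return m, b

def steer(m, b, box_center, slope_tol=0.2, dev_tol=40):
    """Order a heading adjustment when the row line is too slanted or a
    single detected box deviates horizontally from it; else go forward."""
    x, y = box_center
    deviation = x - (m * y + b)      # horizontal deviation in pixels
    if abs(m) > slope_tol or abs(deviation) > dev_tol:
        return "adjust"
    return "forward"
```

On each frame with two detections the line is refitted; frames with one detection reuse the last fitted line, matching the update rule described in the abstract.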
As a result of the increasing world population and the growing demand for food, there is a great need to increase agricultural productivity. However, the problem is aggravated by population migration from rural areas to cities, which decreases the workforce in the agricultural sector. In this regard, within the concept of Agriculture 4.0, the introduction of autonomous robots into agricultural activities may address these problems, mitigating the growing labor shortage and promoting increased agricultural productivity. This work presents the algorithm used to perform autonomous navigation, based on global positioning systems (GPS), of the multitasking robotic rover for agricultural applications (R2A2), aimed at performing herbicide spraying. An augmented reality (AR) web application has also been developed to assist in the supervision of autonomous vehicles. Code in C/C++ was developed for the autonomous movement of the robotic platform, using an Arduino Mega2560 as the main microcontroller, and the positioning-based AR web application was developed using the AR.js library and the A-Frame framework, compiled in HTML code. The application was tested in a peach orchard and achieved approximately 94% correct responses on average, which reveals the accuracy of the technological solution developed. The results and conclusions of the autonomous movement algorithm and the web application are also presented.
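One step of GPS-based waypoint following of the kind the abstract describes might look like the following sketch. The bearing formula is the standard initial great-circle bearing; the heading tolerance is an assumed value, and none of this is taken from the authors' C/C++ implementation:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from the current GPS fix
    (lat1, lon1) to the waypoint (lat2, lon2), in degrees [0, 360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0

def steering_command(heading_deg, target_bearing_deg, tol_deg=5.0):
    """Turn toward the waypoint when the heading error (wrapped to
    [-180, 180)) exceeds the tolerance; otherwise keep moving forward."""
    error = (target_bearing_deg - heading_deg + 540.0) % 360.0 - 180.0
    if error > tol_deg:
        return "right"
    if error < -tol_deg:
        return "left"
    return "forward"
```

The same logic ports directly to Arduino C/C++: compute the bearing to the next waypoint from each GPS fix, compare it with the compass heading, and issue differential motor commands.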