The main components developed were the battery pack, sensor feedback, control system, and body design. These improvements were made while keeping ease of manufacturing and production cost in mind.
For rapid prototyping, most of the body parts were printed on a FormLabs 3BL SLA (stereolithography) 3D printer using Tough 1500 resin. This resin provides the structural rigidity as well as the flexibility needed for the robotic dog's main components. Improvements to the physical structure added mounting points for cameras, LiDARs, and robotic arms.
These attachments broaden the robot's use cases and applications while keeping costs low.
Visual and LiDAR-based Simultaneous Localization and Mapping (SLAM) can now be implemented on this robot platform.
This allows the robot to roam autonomously and set its own headings. Autonomous driving is crucial to automated data collection, increasing both efficiency and safety. With the sensory feedback system, the robot's course of action can be automated while accounting for unplanned obstacles such as changes in its environment; it can avoid people, machinery, and adverse weather conditions. Autonomous driving also makes the platform well suited to work environments that are dangerous for human workers, or to reconnaissance missions, opening up opportunities in national defense.
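To make the obstacle-avoidance idea concrete, here is a minimal sketch of a reactive avoidance step driven by a 2D LiDAR scan. The sensor and velocity interfaces, thresholds, and speeds are illustrative assumptions, not the platform's actual API.

```python
# Minimal sketch of reactive obstacle avoidance from a 2D LiDAR scan.
# The scan format and the velocity command convention are assumptions;
# the real platform would use its own driver or ROS topics.
import math

SAFE_DISTANCE_M = 0.6   # stop-and-turn threshold (assumed)
FORWARD_SPEED = 0.3     # m/s (assumed)
TURN_SPEED = 0.5        # rad/s (assumed)

def avoid_obstacles(scan_ranges, angle_min, angle_increment):
    """Return (linear, angular) velocity for one LiDAR scan.

    scan_ranges: list of range readings in metres, ordered by angle.
    """
    # Consider only a +/- 30 degree cone in front of the robot.
    front = []
    for i, r in enumerate(scan_ranges):
        angle = angle_min + i * angle_increment
        if abs(angle) < math.radians(30) and math.isfinite(r):
            front.append((angle, r))

    if not front or min(r for _, r in front) > SAFE_DISTANCE_M:
        return FORWARD_SPEED, 0.0          # path is clear, keep driving

    # Turn away from the side where the nearest obstacle was seen.
    nearest_angle, _ = min(front, key=lambda ar: ar[1])
    turn = TURN_SPEED if nearest_angle < 0 else -TURN_SPEED
    return 0.0, turn
```

In a full navigation stack this reactive layer would sit underneath the global planner, overriding planned velocities only when something unexpected appears in the scan.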
To conclude, research and development on the robotic dog platform expanded its use cases in real-life scenarios. Enhancements in software and mechanical design improved its ability to operate autonomously, broadening its applications for surveillance and data collection. We hope to further test and expand its capabilities in the field of national defense as well as in urban work areas.
Edge computing hardware collects data from a LiDAR sensor, which is later merged with data from other LiDARs across a variety of edge nodes, giving us a high-quality map of an entire city or region. From there, we can locate the robots and vehicles we want to control and begin issuing commands based on various path-planning algorithms.
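As a rough sketch of the merging step, the snippet below accumulates per-node LiDAR point clouds into a single occupancy-style grid in a shared map frame. The node poses, grid resolution, and map size are placeholder assumptions; the actual pipeline may use a point-cloud library and 3D registration instead.

```python
# Sketch: merge per-node 2D LiDAR point clouds into one shared map grid.
# Node poses, resolution, and map extent are illustrative assumptions.
import numpy as np

def transform_points(points, pose):
    """Apply a 2D pose (x, y, yaw) to an (N, 2) array of LiDAR points."""
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s], [s, c]])
    return points @ rot.T + np.array([x, y])

def merge_into_grid(node_clouds, node_poses, resolution=0.1, size_m=200.0):
    """Accumulate clouds from every edge node into one occupancy grid."""
    cells = int(size_m / resolution)
    grid = np.zeros((cells, cells), dtype=np.uint16)
    for node, cloud in node_clouds.items():
        world = transform_points(cloud, node_poses[node])
        # Shift so the grid is centred on the shared map-frame origin.
        idx = ((world + size_m / 2) / resolution).astype(int)
        idx = idx[(idx >= 0).all(axis=1) & (idx < cells).all(axis=1)]
        np.add.at(grid, (idx[:, 1], idx[:, 0]), 1)
    return grid  # higher counts = more LiDAR returns = likely occupied
```

A grid like this gives the path-planning layer a common representation to run search algorithms such as A* over, regardless of which edge node contributed the data.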
Progress Recording: https://drive.google.com/file/d/1RjL3Y3koFQoi1-wB3CBv734iHj2BbQk1/view
What you'll see in the video is the robot switching from its camera frame to its SP LiDAR frame: in essence, computer vision for localization and LiDAR for fine-tuning corrections.
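The logic of that switch can be sketched as follows: take the coarse pose from the vision pipeline and refine it with a LiDAR scan match against the map, falling back to the vision estimate when the match is poor. The `scan_matcher` interface and the fitness threshold here are placeholders, not the platform's actual components.

```python
# Sketch of the camera-then-LiDAR correction shown in the recording.
# The scan_matcher callable and the 0.7 fitness threshold are assumptions.

def fuse_pose(vision_pose, lidar_scan, map_grid, scan_matcher):
    """Return a corrected pose.

    vision_pose: (x, y, yaw) estimate from the camera pipeline.
    scan_matcher: callable that aligns the scan to the map near a seed pose
                  and returns (refined_pose, fitness_score).
    """
    refined, fitness = scan_matcher(lidar_scan, map_grid, seed=vision_pose)
    # Accept the LiDAR correction only when the match is confident;
    # otherwise keep the vision estimate and flag it for re-localization.
    if fitness > 0.7:
        return refined
    return vision_pose
```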
Our tasks included building out hardware (robots, workstations, automation rigs) and the software infrastructure to connect everything together.
Industry 4.0 Experience: https://www.eng.mcmaster.ca/sept/practice/learning-factory/
Users are able to control the robot remotely or give it navigation goals, i.e., points on a map where the robot should end up. Using various 2D and 3D sensors, the robot localizes itself on the map, plans a path to its goal, and avoids obstacles along the way. "The global autonomous mobile robots market size was valued at USD 1.9 billion in 2019 and is expected to grow at a compound annual growth rate (CAGR) of 19.6% from 2020 to 2027."
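As an example of giving the robot such a goal, here is a short sketch assuming a ROS 1 `move_base` navigation stack; the actual robot may expose a different interface, and the frame name and coordinates are placeholders.

```python
#!/usr/bin/env python
# Sketch of sending a navigation goal, assuming a ROS 1 move_base stack.
# Frame name and coordinates below are placeholders.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def send_goal(x, y, orientation_w=1.0):
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"          # goal expressed in the map frame
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = orientation_w  # identity orientation by default

    client.send_goal(goal)        # planner handles pathing and obstacle avoidance
    client.wait_for_result()
    return client.get_state()

if __name__ == "__main__":
    rospy.init_node("send_nav_goal")
    send_goal(2.0, 1.5)
```

Once the goal is received, the navigation stack performs exactly the loop described above: localize on the map, plan a path, and replan around obstacles detected by the 2D and 3D sensors.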