It was after midnight in the Maltese search-and-rescue zone of the Mediterranean when a rubber boat originating from Libya and carrying dozens of migrants encountered a hulking cargo ship from Madeira and a European military aircraft. The ship's captain stopped the engines, and the aircraft flashed its lights at the rubber boat. But neither the ship nor the aircraft came to the rescue. Instead, Maltese authorities told the ship's captain to wait for vessels from Malta to pick up the migrants. By the time those boats arrived, three migrants had drowned trying to swim to the idle ship.

The private, Malta-based vessels picked up the survivors, steamed about 237 kilometers south, and handed the migrants over to authorities in Libya, which was and is in the midst of a civil war, rather than returning to Malta, 160 kilometers away. Five more migrants died on the southward journey. By delivering the migrants there, the masters of the Maltese vessels, and perhaps the European rescue authorities involved, may have violated the international law of the sea, which requires ship masters to return people they rescue to a safe port.

Imagine being able to see fast-moving objects under poor lighting conditions, all with a wider angle of view. We humans will have to make do with the vision system we've evolved, but computer vision is always reaching new limits. In a recent advancement, a research team in South Korea has combined two different types of cameras to better track fast-moving objects and create 3D maps of challenging environments.

The first type of camera used in the new design is an event-based camera, which excels at capturing fast-moving objects. The second is an omnidirectional (or fisheye) camera, which captures very wide angles. Kuk-Jin Yoon, an associate professor at the Visual Intelligence Lab at the Korea Advanced Institute of Science and Technology (KAIST), notes that both camera types offer advantages that are desirable for computer vision. "The event camera has much less latency, less motion blur, much higher dynamic range, and excellent power efficiency," he says. "On the other hand, omnidirectional cameras (cameras with fisheye lenses) allow us to get visual information from much wider views."

His team sought to combine these approaches in a new design called event-based omnidirectional multi-view stereo (EOMVS). In terms of hardware, this means pairing a fisheye lens with an event-based camera: the new system uses an omnidirectional event camera setup consisting of a DVXplorer event camera (rear) and an Entaniya Fisheye lens (front). Next, software is needed to reconstruct 3D scenes with high accuracy. A commonly used approach involves taking multiple images from different camera angles in order to reconstruct 3D information. Yoon and his colleagues used a similar approach, but rather than using images, the EOMVS approach reconstructs 3D spaces using event data captured by the modified event camera.

The researchers tested their design against LiDAR measurements, which are known to be highly accurate for mapping out 3D spaces. The results were published July 9 in IEEE Robotics and Automation Letters. In their paper, the authors note that they believe this work is the first attempt to set up and calibrate an omnidirectional event camera and use it to solve a vision task. They tested the system with the field of view set at 145°, 160°, and 180°. Yoon notes that EOMVS was very accurate at mapping out 3D spaces, with an error rate of 3 percent, and the approach also proved to meet all the desirable features expected of such a combination of cameras. "We can detect and track very fast-moving objects under very severe illumination without losing them in the field of view," says Yoon.

Along with testing EOMVS in a real-world setting, Yoon and his colleagues at KAIST also tested the camera system in simulated environments using the 3D computer graphics software Blender. The current EOMVS design requires knowledge of where the camera is positioned in order to piece together and analyze the data, but the researchers are interested in devising a more flexible design, where the exact position of the camera does not need to be known beforehand. To achieve this, they aim to incorporate an algorithm that estimates the positions of the camera as it moves. Yoon says his team is planning to commercialize this new design, as well as build upon it. "In that sense, 3D mapping with drones can be the most promising real-world application of the EOMVS."
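Event cameras such as the DVXplorer report asynchronous per-pixel brightness changes rather than full frames, which is where the low latency and reduced motion blur come from. As a minimal sketch of what that stream looks like and how it can be turned into a frame-like image, here is a toy example with synthetic random events (the event layout, resolution, and timing are made up for illustration, not taken from the paper):

```python
import numpy as np

# Hypothetical event stream: each event is (x, y, timestamp, polarity).
# A real event camera emits these asynchronously whenever a pixel's
# brightness changes; this toy generator just produces random events.
rng = np.random.default_rng(0)
width, height, n_events = 64, 48, 10_000
events = np.column_stack([
    rng.integers(0, width, n_events),           # x coordinate
    rng.integers(0, height, n_events),          # y coordinate
    np.sort(rng.uniform(0.0, 1.0, n_events)),   # timestamp in seconds
    rng.choice([-1, 1], n_events),              # polarity: brighter/darker
])

def accumulate(events, width, height, t0, t1):
    """Sum event polarities per pixel over a short time window,
    producing a frame-like image from the asynchronous stream."""
    frame = np.zeros((height, width))
    mask = (events[:, 2] >= t0) & (events[:, 2] < t1)
    for x, y, _, p in events[mask]:
        frame[int(y), int(x)] += p
    return frame

frame = accumulate(events, width, height, 0.0, 0.1)
print(frame.shape)  # (48, 64)
```

Because events arrive with microsecond-scale timestamps, the window [t0, t1) can be made very short, which is why fast motion stays sharp where a conventional frame would blur.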
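The wide fields of view mentioned in the article (up to 180°) are possible because a fisheye lens maps the angle from the optical axis onto the image radius, rather than using a perspective projection. A common model for this is the equidistant projection, r = f·θ; the sketch below uses it purely as an illustration (the focal length, image center, and the specific model used in the paper's calibration are assumptions here):

```python
import numpy as np

def project_equidistant(point, f=200.0, cx=320.0, cy=240.0):
    """Map a 3D point in camera coordinates (z forward) to pixel
    coordinates under the equidistant fisheye model, where the image
    radius grows linearly with the angle theta from the optical axis."""
    x, y, z = point
    theta = np.arctan2(np.hypot(x, y), z)   # angle from optical axis
    phi = np.arctan2(y, x)                  # azimuth around the axis
    r = f * theta                           # equidistant mapping
    return cx + r * np.cos(phi), cy + r * np.sin(phi)

# A point 90 degrees off-axis (z = 0) still lands at a finite pixel,
# which is what lets the lens cover a 180-degree field of view.
u, v = project_equidistant(np.array([1.0, 0.0, 0.0]))
print(round(u, 1), round(v, 1))  # 634.2 240.0  (r = 200 * pi/2)
```

Under a perspective projection the same point would be at infinity, so no conventional lens could image it at all.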
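The classical multi-view idea the article describes, recovering 3D structure from observations taken at known camera positions, can be sketched with linear (DLT) triangulation from two views. This also shows why the current EOMVS design needs the camera positions in advance: the projection matrices below encode known poses. The cameras and the 3D point are invented for illustration, and EOMVS works on event data rather than image correspondences, but the underlying geometry is the same:

```python
import numpy as np

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation: given two 3x4 projection matrices
    with known poses and one observation of the same point in each
    view, solve for the 3D point as the null vector of A."""
    A = np.vstack([
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                 # homogeneous solution
    return X[:3] / X[3]        # dehomogenize

# Two normalized cameras: one at the origin, one shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0, 1.0])   # ground-truth 3D point
u1 = P1 @ X_true; u1 = u1[:2] / u1[2]     # projection into view 1
u2 = P2 @ X_true; u2 = u2[:2] / u2[2]     # projection into view 2

X_est = triangulate(P1, P2, u1, u2)
print(np.round(X_est, 3))  # ≈ [0.5, 0.2, 4.0]
```

The planned extension mentioned by Yoon, estimating the camera's position as it moves instead of fixing it beforehand, would replace the known matrices P1 and P2 with poses solved jointly with the structure.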