December 2016 snapshots: Scientific imaging, autonomous vehicles, security and surveillance imaging
Researchers use high speed camera to study lightning
A Florida Institute of Technology (Melbourne, FL, USA; www.fit.edu) team deployed a high-speed camera to study atmospheric events.
As part of a grant funded by the National Science Foundation, Dr. Ningyu Liu, an associate professor, wrote the grant proposal to learn more about lightning and about shorter-duration, high-altitude discharges called jets and sprites.
Liu, along with Dr. Hamid Rassoul and a group of Ph.D. students in the Department of Physics and Space Sciences, used a Phantom v1210 digital ultra-high-speed camera from Vision Research (Wayne, NJ, USA; www.phantomhighspeed.com). Featuring a 1280 x 800 CMOS image sensor with a 28 μm pixel size and 12-bit depth, the camera achieves speeds of 12,000 fps at full resolution and up to 820,000 fps at reduced resolution. The thermoelectrically and heat pipe-cooled camera features GigE and 10Gb Ethernet interfaces, direct recording to CineMag, and "quiet fans" for vibration-sensitive applications.
Lightning strikes are recorded from inside and on top of buildings on the Florida university's campus, using the highest frame rate that still lets the team capture the large spatial extent of lightning, all while recording at up to 22 GPixels/s, according to Julia Tilles, a Ph.D. student on the team.
"We're limited to roughly 100,000 fps because moving to a higher frame rate would make our field of view just too small. At higher frame rates and lower resolution, a lightning channel comes into and out of the frame so fast that we just wouldn't get a lot of information and would have a much lower chance of capturing something in the field of view," she said. "The camera's maximum FPS can be as high as 570,000 fps, but pushing the camera to perform at that rate doesn't give us a good time-resolution to spatial-resolution trade-off. Still, we are experimenting with shooting at slightly higher frame rates."
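The trade-off Tilles describes follows from the camera's roughly fixed pixel throughput: at a constant readout rate, raising the frame rate shrinks the frame that can be read out. A minimal sketch using the full-resolution figures quoted above; the constant-throughput assumption and the 16:10 windowing are simplifications of ours, not Vision Research specifications (real cameras impose discrete window sizes and minimum frame heights):

```python
# Sketch of the frame-rate vs. field-of-view trade-off, assuming the
# camera's pixel throughput is fixed at its full-resolution rate
# (1280 x 800 at 12,000 fps, from the specs above). This is an
# approximation, not a Phantom v1210 datasheet.

FULL_WIDTH, FULL_HEIGHT = 1280, 800
FULL_RES_FPS = 12_000
THROUGHPUT = FULL_WIDTH * FULL_HEIGHT * FULL_RES_FPS  # ~12.3 GPix/s

def max_pixels_per_frame(fps):
    """Pixels available per frame at a given frame rate."""
    return THROUGHPUT / fps

for fps in (12_000, 100_000, 570_000):
    pixels = max_pixels_per_frame(fps)
    # Keep the sensor's 16:10 aspect ratio for the reduced window.
    width = (pixels * 16 / 10) ** 0.5
    print(f"{fps:>7,} fps -> ~{pixels / 1e6:.3f} MPix "
          f"(~{width:.0f} x {pixels / width:.0f})")
```

Under this assumption, moving from 12,000 fps to 100,000 fps cuts the frame from about 1 MPix to roughly 0.12 MPix, which matches the team's concern about the field of view becoming too small to keep a lightning channel in frame.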
Data captured by the camera enables the team to examine electric field measurements and deduce the corresponding orientation of the channel and the direction of the current.
"The v1210 is an incredibly sophisticated camera. When we shoot between 7,000 fps to 12,000 fps, we're able to see some of the finer details of a lightning flash, such as branching and leader propagation. This resolution is high enough for us to see many elusive processes taking place below the cloud, and it gives us a nice, full picture. We can also use other data sets, such as the National Charge-Moment Change Network (CMCN), to quantify charge moved during a lightning strike to ground," said Tilles.
Along with the Vision Research camera, the team uses technology such as LMA data, NEXRAD radar data, X-ray data, electric field data, charge-moment-change data, and NLDN data to further evaluate the videos they capture.
Hack highlights vulnerability in connected devices
A recent hacking incident involving as many as one million Chinese security cameras and digital video recorders highlights the fact that internet-connected cameras, without proper safeguarding, face the risk of being compromised.
Attackers used Chinese-made consumer security cameras, digital video recorders, and other devices to generate webpage requests and data that knocked various targets offline, according to a Wall Street Journal article. Among the applications and markets that Vision Systems Design covers, security and surveillance cameras are particularly exposed to this kind of vulnerability.
Tim Matthews, vice president of marketing for the Incapsula product line at Imperva (Redwood Shores, CA, USA; www.imperva.com), a company that specializes in web security and mitigating DDoS attacks, notes that last year his company revealed major vulnerabilities in CCTV cameras whose operators had not taken the proper steps to protect against threats.
"Last year, the Imperva research revealed that CCTV cameras in popular destinations, like shopping malls, were being turned into botnets by cybercriminals, as a result of camera operators taking a lax approach to security and failing to change default passwords on the devices," he said. "CCTV cameras are among the most common Internet-of-Things (IoT) devices and Imperva first warned about CCTV botnets in March 2014 when it became aware of a steep 240% increase in botnet activity on its network, much of it traced back to compromised CCTV cameras."
He continued, "As we now know, these attacks are happening more often, and millions of CCTV cameras have already been compromised. Whether it be a router, a Wi-Fi access point, or a CCTV camera, default factory credentials are only there to be changed upon installation. Imperva recommends following this security protocol of changing default passwords on devices."
Tim Erlin, senior director of IT security and risk strategy at cyber security company Tripwire (Portland, OR, USA; www.tripwire.com) echoes this sentiment, and notes that in order to use network-connected cameras, regardless of the application, companies should be taking precautionary measures.
"The use of network connected cameras in a recent large scale Distributed Denial of Service (DDoS) attack is a clear example of how a seemingly innocuous connected device might be used for malicious purposes," he said. "Security researchers have been demonstrating attacks against IP cameras for a long time."
"Preventing attacks against connected devices," he added, "requires effort from both the industry and users. Vendors need to adhere to best practices for built-in security measures, including secure remote access, basic encryption, and patching known vulnerabilities. These systems can't be deployed without consideration for future security updates, ideally automated updates."
Consumers should also be mindful of potential threats: deploy systems with security in mind, change default credentials, and put adequate access control in place, because attackers will find open and accessible systems if they are available.
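The default-credential check both experts recommend can be automated as part of a deployment audit. A minimal sketch, assuming a hypothetical camera that protects its web interface with HTTP basic authentication; the credential list and test address are illustrative, and the check should only ever be run against devices you own:

```python
# Sketch: flag a device that still accepts factory-default credentials.
# The credential pairs below are illustrative examples, not a real
# vendor's defaults. Only run this against hardware you own.
import base64
import urllib.error
import urllib.request

DEFAULT_CREDS = [("admin", "admin"), ("admin", "12345"), ("root", "root")]

def accepts_default_creds(host, port=80, timeout=3.0):
    """Return the first default (user, password) pair the device accepts,
    or None if every pair is rejected or the device is unreachable."""
    for user, password in DEFAULT_CREDS:
        token = base64.b64encode(f"{user}:{password}".encode()).decode()
        req = urllib.request.Request(
            f"http://{host}:{port}/",
            headers={"Authorization": f"Basic {token}"},
        )
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                if resp.status == 200:
                    return (user, password)
        except (urllib.error.URLError, OSError):
            continue  # 401/403, connection refused, or timeout: try next pair
    return None

if __name__ == "__main__":
    # 192.0.2.x is a reserved documentation address; substitute a camera you own.
    hit = accepts_default_creds("192.0.2.10", timeout=1.0)
    print(f"accepts default credentials: {hit}" if hit else "no defaults accepted")
```

A camera that passes this check has at least cleared the low bar that the Imperva and Tripwire comments describe; it says nothing about firmware vulnerabilities or missing patches.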
Most major companies and organizations likely go to great lengths to protect themselves against such attacks. For those that do not, these incidents serve as a lesson that being proactive can pay off in the long run.
Ford to launch fully autonomous vehicle by 2021
With an eye toward launching a fully autonomous vehicle for ride sharing by 2021, Ford Motor Company (Dearborn, MI, USA; www.ford.com) has made a number of key investments in tech companies and has doubled its team in Silicon Valley.
The company's intent is to have a high-volume, fully autonomous Society of Automotive Engineers level 4-capable vehicle in commercial operation in 2021 in a ride-hailing or ride-sharing service. To achieve its goals, Ford has made a number of key investments in tech companies, expanding its advanced algorithm, 3D mapping, LIDAR, and sensor capabilities. These investments include:
Velodyne (Morgan Hill, CA, USA; www.velodynelidar.com): As previously covered on our site (http://bit.ly/2efKyRr), Ford has invested in Velodyne, the leader in LIDAR sensors, with an eye on quickly mass producing a more affordable automotive LIDAR sensor.
SAIPS (Tel Aviv, Israel; www.saips.co.il): Ford has acquired the Israel-based computer vision and machine learning company to further strengthen its artificial intelligence and computer vision capabilities. SAIPS develops algorithms for image and video processing, deep learning, signal processing, and classification, which Ford hopes will help its autonomous vehicles learn and adapt to their surroundings.
Nirenberg Neuroscience LLC (New York, NY, USA; www.nirenbergneuroscience.com): Ford announced an exclusive licensing agreement with Nirenberg Neuroscience, a machine vision company founded by neuroscientist Dr. Sheila Nirenberg, who cracked the neural code the eye uses to transmit visual information to the brain. Nirenberg Neuroscience has a machine vision platform for performing navigation, object recognition, facial recognition and other functions.
Civil Maps (Albany, CA, USA; www.civilmaps.com): Ford has invested in Civil Maps, a company that has developed a scalable 3D mapping technique, which provides Ford with another way to develop high-resolution 3D maps of autonomous vehicle environments.
Ford has also added two new buildings and 150,000 square feet of work and lab space adjacent to its current Research and Innovation Center in Silicon Valley, creating a dedicated, expanded campus in Palo Alto, with plans to double the size of the Palo Alto team by the end of 2017.
"The next decade will be defined by automation of the automobile, and we see autonomous vehicles as having as significant an impact on society as Ford's moving assembly line did 100 years ago," said Mark Fields, Ford president and CEO. "We're dedicated to putting on the road an autonomous vehicle that can improve safety and solve social and environmental challenges for millions of people - not just those who can afford luxury vehicles."
In 2016, Ford will triple its autonomous vehicle test fleet, making it the largest of any automaker, bringing the number to about 30 self-driving Fusion Hybrid sedans on the roads in California, Arizona and Michigan, with plans to triple it again next year.
Strategic moves such as these make the race to put fully autonomous vehicles on the road increasingly interesting, but Ford is far from alone in the pursuit: Google (Mountain View, CA, USA; www.google.com), Uber (San Francisco, CA, USA; www.uber.com), Tesla (Palo Alto, CA, USA; www.tesla.com), BMW (Munich, Germany; www.bmw.com), Intel (Santa Clara, CA, USA; www.intel.com), Nissan (Yokohama, Japan; www.nissan.com), and NASA (Washington, D.C.; www.nasa.gov) are all working toward the same goal. With so much focus on the technology, driverless cars could fill the roads sooner than many expected.
Security cameras embed deep neural network processing
Embedded vision company Movidius (San Mateo, CA, USA; www.movidius.com) has announced a partnership with the world's largest IP security camera provider, Hikvision (Zhejiang, China; www.hikvision.com), to bring deep neural network technology to the company's cameras, enabling much higher-accuracy video analytics to run locally.
As part of the deal, Hikvision's cameras will be powered by the Movidius Myriad 2 vision processing unit (VPU). Myriad 2 features a configuration of 12 programmable vector cores, which allows users to implement custom algorithms. The VPU delivers teraflops (trillions of floating-point operations per second) of performance within a 1 W power envelope. It features a built-in image signal processor and hardware accelerators, and offloads all vision-related tasks from a device's CPU and GPU.
Traditionally, notes Movidius, running deep neural networks requires devices to depend on additional compute in the cloud, but the Myriad 2 VPU is a low-power device that enables the running of advanced algorithms inside the cameras themselves. This includes such tasks as car model classification, intruder detection, suspicious baggage alert, and seatbelt detection.
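One practical consequence of running inference inside the camera is uplink bandwidth: only event metadata needs to leave the device, rather than a continuous video stream for cloud analysis. A back-of-the-envelope comparison; the frame size, compression rate, and event sizes below are illustrative assumptions of ours, not Hikvision or Movidius figures:

```python
# Back-of-the-envelope: uplink bandwidth for cloud-side inference
# (stream compressed video out) vs. on-camera inference (send only
# event metadata). All figures are illustrative assumptions.

FRAME_W, FRAME_H = 1920, 1080   # assumed 1080p surveillance camera
FPS = 25
BITS_PER_PIXEL = 0.1            # rough H.264 compression for static scenes
EVENT_BYTES = 512               # assumed metadata size for one detection
EVENTS_PER_MIN = 2              # assumed alert rate

def cloud_mbps():
    """Megabits/s to stream compressed video to the cloud for analysis."""
    return FRAME_W * FRAME_H * BITS_PER_PIXEL * FPS / 1e6

def edge_mbps():
    """Megabits/s when the network runs on-camera and only events go out."""
    return EVENT_BYTES * 8 * EVENTS_PER_MIN / 60 / 1e6

print(f"cloud: {cloud_mbps():.2f} Mb/s, edge: {edge_mbps():.6f} Mb/s")
```

Under these assumptions the edge approach uses several orders of magnitude less uplink bandwidth per camera, which is one reason low-power on-device inference is attractive for large camera networks.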
"Advances in artificial intelligence are revolutionizing the way we think about personal and public security," says Movidius CEO Remi El-Ouazzane. "The ability to automatically process video in real time to detect anomalies will have a large impact on the way city infrastructure is used. We're delighted to partner with Hikvision to deploy smarter camera networks and contribute to creating safer communities, better transit hubs and more efficient business operations."
By utilizing deep neural networks and stereo 3D sensing, Hikvision has been able to achieve up to 99% accuracy in their advanced visual analytics applications, including those mentioned above.
"There are huge gains to be made when it comes to neural networks and intelligent camera systems," says Hikvision CEO Hu Yangzhong. "With the Myriad 2 VPU we're able to make our analytics offerings much more accurate, flagging more events that require a response while reducing false alarms. Embedded, native intelligence is a major step toward smart, safe and efficiently run cities. We will build a long-term partnership with Movidius and its VPU roadmap."
In September, Intel announced plans to acquire Movidius, with the deal expected to close this year. Movidius has also collaborated with DJI (http://bit.ly/2f9NU7G, Shenzhen, China; www.dji.com), FLIR (http://bit.ly/2eyAPE2, Wilsonville, OR, USA; www.flir.com), Google (http://bit.ly/2f1tOgx, Mountain View, CA, USA; www.google.com) and Lenovo (http://bit.ly/2eEOHdZ, Morrisville, NC, USA; www.lenovo.com), among others.