Off-the-shelf software targets motion reconstruction

To increase the field of view (FOV) in its Elegra system, Siemens Medical Systems Ultrasound Group (Issaquah, WA) uses a proprietary algorithm that registers sequential image frames, estimates transducer motion, and constructs a panoramic view of the extended FOV in real time (see Vision Systems Design, April 1997, p. 28).
Dec. 1, 2000


For every new image acquired, pixels in the previous images are compared with the new image, and the relative motion of each pixel is calculated from the previous image to its best matching point in the new image. The translations computed for each pixel, known as motion vectors, are used to translate and rotate the new image accordingly. While such imaging algorithms have remained proprietary and hand-coded in embedded digital-signal processors (DSPs), the concept of predicting motion using vector estimation is not new and has been the subject of many university research projects for more than two decades.
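The Siemens and DynaPel algorithms are proprietary, but the per-pixel matching described above can be illustrated with a generic block-matching sketch: for each block of the previous frame, exhaustively search a small window in the new frame for the offset that minimizes the sum of absolute differences (SAD). Function names and parameters here are illustrative, not taken from either product.

```python
import numpy as np

def block_match(prev, curr, block=4, search=2):
    """Estimate one motion vector per block of `prev` by exhaustive
    search in `curr`, minimizing the sum of absolute differences (SAD)."""
    h, w = prev.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = prev[by:by + block, bx:bx + block].astype(int)
            best, best_sad = (0, 0), np.inf
            # Try every displacement (dy, dx) within the search window.
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    cand = curr[y:y + block, x:x + block].astype(int)
                    sad = np.abs(cand - ref).sum()
                    if sad < best_sad:
                        best_sad, best = sad, (dy, dx)
            vectors[(by, bx)] = best
    return vectors
```

Production codecs and imaging systems use hierarchical or gradient-based search rather than this brute-force loop, but the output is the same kind of motion vector field the article describes.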

Click here to enlarge image

To commercialize this technology, startup DynaPel Systems (New York, NY) is taking advantage of the dramatic improvements in PC-based processing over the last five years with a commercial package called MediaMend. Based on work originally performed by Harold Maartens at the Norwegian University of Science and Technology (NTNU; Trondheim, Norway), DynaPel's core technology—called the PelKinetics Engine—analyzes the motion between frames of video. Motion information is extracted between frames and is represented as an array of pixel-by-pixel vectors called motion vector fields (MVFs).


Motion vector fields represent the motion of a tennis player swinging his racket. In the player's upper body, where the most motion occurs, longer arrows indicate that more motion is occurring between frames. The background, by contrast, appears mostly as dots, representing no motion.

After MVFs are identified, they are further processed using statistical computations on the detailed motion information to identify and separate the motion information into two categories representing camera and object motion. Whereas camera-motion information describes the panning, zooming, and rolling of the camera, object motion includes such components as a person in the foreground or a vehicle in the background of the image series.
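DynaPel has not published the statistical computations it uses, but one simple way to make this camera/object split is to take a robust estimate (here, the per-component median) of the whole vector field as the global camera motion, then flag pixels whose residual motion exceeds a threshold as object motion. The function name and threshold are assumptions for illustration; a real system would also model zoom and roll, not just panning.

```python
import numpy as np

def split_camera_object(mvf, thresh=1.0):
    """Split an (H, W, 2) motion vector field into an estimated global
    camera translation and a boolean mask of object-motion pixels.

    The camera motion is taken as the per-component median of all
    vectors; pixels whose residual vector magnitude exceeds `thresh`
    are attributed to independently moving objects."""
    camera = np.median(mvf.reshape(-1, 2), axis=0)   # global (dy, dx) pan
    residual = mvf - camera                          # motion not explained
    object_mask = np.linalg.norm(residual, axis=-1) > thresh
    return camera, object_mask
```

The median is used instead of the mean so that a foreground object covering a minority of the frame does not bias the camera-motion estimate.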

The combination of the dense motion field with the derived camera- and object-motion information has enabled DynaPel Systems to develop MediaMend. In operation, MediaMend uses the information generated by the PelKinetics engine along with the original video to mathematically synthesize new frames of video in between the existing frames. The key to this technique, known as video interpolation, is to determine the proper camera motion and location of objects in the newly interpolated frame.
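A minimal sketch of this interpolation idea, under the simplifying assumption of pure forward warping: push each pixel of the previous frame halfway along its motion vector to form the in-between frame. Real interpolators (including, presumably, MediaMend) must also handle occlusions, holes, and blending with the next frame, which this toy version ignores.

```python
import numpy as np

def interpolate_midframe(prev, mvf):
    """Synthesize a frame halfway between `prev` and the next frame by
    pushing each pixel of `prev` half-way along its (dy, dx) motion
    vector (forward warping; unwritten pixels keep their old value)."""
    h, w = prev.shape
    mid = prev.copy()
    for y in range(h):
        for x in range(w):
            dy, dx = mvf[y, x]
            ny = y + int(round(dy / 2))
            nx = x + int(round(dx / 2))
            if 0 <= ny < h and 0 <= nx < w:
                mid[ny, nx] = prev[y, x]
    return mid
```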

According to Steven Edelson, chief executive officer of DynaPel, the software is currently being bundled with a number of consumer applications to smooth shaky video streams, convert frame rates, and correct pan-and-zoom errors. A magnetic-resonance-imaging (MRI) equipment manufacturer has also shown interest in using the software to convert motion-based MRI images.

Andrew Wilson, Editor,
[email protected]
