Graphics Software

Algorithms for Motion Tracking?

Keith Handy asks: "I seem to be unable to find algorithms and/or open source programs that will do accurate motion tracking, i.e., you mark a point on an object in frame 36, and the program can follow that point on that object through all the frames following it. This is useful not just for analyzing motion, but also for interpolating/extrapolating frames of video -- so if you had something at only 15 fps, you could generate in-between frames (which are not just crossfades between the frames) and actually smooth the effect of the motion. Not something so complicated as to get into actual physics -- just something that will indicate where (in 2D only) that part of the object has moved from one frame to the next, for any given point in the whole picture. And for that matter it doesn't have to be 100% accurate, just any means of generating a reasonable motion-flow map." This doesn't strike me as an easy algorithm to develop, but are there any papers, online or offline, that might describe an algorithm that can at least track objects in an image?

"In other words, I want something that does this, in order to write code that will do things like this and this. I already know how to write code to blur and warp images, so to be able to track motion would give me (and you) the same capabilities as these expensive plug-ins.

Anyone know any other resources, directions, or existing code I could look into to find out more about how this works, so I can incorporate it into my own programming instead of paying hundreds or thousands of dollars for limited, proprietary use of the technology?"

This discussion has been archived. No new comments can be posted.

  • Book (Score:5, Informative)

    by rm-r ( 115254 ) on Tuesday February 05, 2002 @07:30AM (#2954521) Homepage
    Hmm, not sure about any resources on the net, but I did similar stuff at uni so I can recommend a book. Try Image Processing, Analysis and Machine Vision by Hlavac et al. [amazon.com]. It's a very good book with plenty of code-neutral algorithms. Good luck.
    • I'm just tossing in a second for that book - FWIW, I wrote exactly what you're looking for (the intelligent interpolation of frames) in a class that used this book (or the previous edition) as the core text. Good book, good class. If I had the code, I'd toss it to you. Incidentally, the frames I used were every other frame from a sequence of a Star Trek episode (IIRC, it was found via Archie on an MIT ftp site)... which allowed me to go back and compare with the actual frames that I had pulled out and see where the errors were. As a result, I also wrote a few nice vdiff image programs - I'm sure a fm: search will now pull a dozen versions up, but the technique I just described is useful to refine your processing code.

      --
      Evan "Tired, and I'm not gonna rewrite that for easier parsing" E.

  • Video compression (Score:3, Informative)

    by cperciva ( 102828 ) on Tuesday February 05, 2002 @07:36AM (#2954528) Homepage
    I think the best use of this would be in video compression -- if you can recognize the movement of objects between frames, you can encode how much things have moved instead of re-encoding the entire image.

    Which is exactly what MPEG does... very crudely. The MPEG solution seems to be to compare a block (8x8?) of pixels with every block in the previous frame.

    The fact that MPEG doesn't use anything more sophisticated than this suggests to me that there probably aren't any algorithms which consistently work better.
    • Re:Video compression (Score:2, Informative)

      by rm-r ( 115254 )
      Motion tracking is also useful in military software, i.e., to find that pesky moving Afghan and point a gun at him. Similar technology is also used in some CCTV systems, but apart from that it doesn't seem to have crossed over into the mainstream much. There's a lot of cool software and ideas out there, much of it in the public domain, if not exactly GPL'd, so I'm sure it's just waiting for someone with a really good idea...
    • The fact that MPEG doesn't use anything more sophisticated than this suggests to me that there probably aren't any algorithms which consistently work better.

      Not that I know of, but it could also be that there aren't any algorithms that work better considering the horsepower available in many devices. There could be many algorithms that work much better assuming dual Athlon 1900+s to execute them.
    • i'm sure that we will see something like this in the future, but even right now with our 2ghz machines, point tracking is pretty slow.

      download a trial copy of shake [nothingreal.com] and try some tracking for yourself. just trying to follow 4 points takes about 1 second per frame, imagine how long it would take to process every pixel (or even 8x8 blocks) in a 30 minute video!

  • openCV (Score:5, Informative)

    by bjpirt ( 251795 ) on Tuesday February 05, 2002 @08:17AM (#2954603)
    have a look at openCV (stands for open computer vision), it was originally developed by intel, but was later open sourced. runs on both linux and windows and is mainly used for real time motion tracking of live video sources. i'm sure there are some pretty nice algorithms in the source there somewhere. They have their stuff on sourceforge [sourceforge.net]

    and a yahoo groups support forum thing here [yahoo.com]

    the original intel pages are here [intel.com]
    cheers,
    bjpirt
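
    A minimal sketch of this kind of point tracking using OpenCV's modern Python bindings (the 2002-era C API differed): goodFeaturesToTrack picks corner features and calcOpticalFlowPyrLK follows them with the pyramidal Lucas-Kanade method. The frame file names are placeholders:

```python
import cv2

# Hypothetical frame files; any two consecutive grayscale frames will do.
prev = cv2.imread("frame_036.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_037.png", cv2.IMREAD_GRAYSCALE)

# Pick up to 100 easily trackable corner points in the first frame.
points = cv2.goodFeaturesToTrack(prev, maxCorners=100,
                                 qualityLevel=0.01, minDistance=8)

# Follow those points into the next frame (pyramidal Lucas-Kanade).
new_points, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, points, None)

for old, new, ok in zip(points.reshape(-1, 2),
                        new_points.reshape(-1, 2), status.ravel()):
    if ok:  # status is 1 where the flow for that point was found
        print(f"({old[0]:.1f}, {old[1]:.1f}) -> ({new[0]:.1f}, {new[1]:.1f})")
```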
  • by dario_moreno ( 263767 ) on Tuesday February 05, 2002 @08:22AM (#2954612) Journal
    Compute the 2D FFT of each frame (in grayscale), then get the intercorrelation function of two neighbouring frames. The maxima are more or less where the objects have moved.

    I only used this method on artificially generated frames, i.e. one frame with translation and noise added. Still, the intercorrelation sinks quite fast. On natural images, there must be a lot of fiddling to do.
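
    A minimal numpy sketch of the idea above, assuming two grayscale frames as 2-D arrays: cross-correlate via the FFT and take the correlation peak as the dominant translation. As the poster notes, this only captures a single global shift and needs fiddling on natural images:

```python
import numpy as np

def estimate_shift(frame_a, frame_b):
    # FFT-based cross-correlation of two grayscale frames; the location
    # of the correlation peak estimates the dominant translation of
    # frame_b relative to frame_a.
    Fa = np.fft.fft2(frame_a)
    Fb = np.fft.fft2(frame_b)
    corr = np.fft.ifft2(np.conj(Fa) * Fb).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Offsets past the midpoint wrap around to negative shifts.
    h, w = frame_a.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dx, dy
```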
  • Use a marker of a known colour (e.g. yellow), then read the raw video stream (a scan line at a time) looking for the largest instance of your colour (e.g. the longest run of yellow on a given scan line), and note where you started on that scan line and the length of the run. From this you can work out the middle of the marker. This should give you the X, Y coords w.r.t. the camera position.

    CJC
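
    A rough sketch of that scan-line search, assuming the frame arrives as an RGB numpy array; the "yellow" thresholds are arbitrary placeholders:

```python
import numpy as np

def find_marker(frame_rgb):
    # Crude marker finder: mask "yellow-ish" pixels (high R and G, low B;
    # thresholds are arbitrary), then take the longest run on any scan line.
    r, g, b = frame_rgb[..., 0], frame_rgb[..., 1], frame_rgb[..., 2]
    mask = (r > 180) & (g > 180) & (b < 100)
    best = None  # (run_length, x_center, y)
    for y, row in enumerate(mask):
        x = 0
        while x < len(row):
            if row[x]:
                start = x
                while x < len(row) and row[x]:
                    x += 1
                run = x - start
                if best is None or run > best[0]:
                    best = (run, start + run // 2, y)
            else:
                x += 1
    return None if best is None else (best[1], best[2])  # marker (x, y)
```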
  • KLT Feature Tracker (Score:5, Informative)

    by The Whinger ( 255233 ) on Tuesday February 05, 2002 @08:27AM (#2954624) Homepage
    Have a look at the KLT tracker - that will probably do what you want.

    An implementation can be found here:

    http://vision.stanford.edu/~birch/klt/
  • One way might be to identify "features" in the image, e.g. by colour, brightness or changes, and build an association tree.

    Basically, identify all "peaks" (whatever feature you're interested in) and sort them. Start with the most outstanding feature and associate its nearest neighbours with it. Repeat many times. You will have a data structure of references which will produce a map of islands and isthmuses, depending on how far down you look.

    Attach a "label" (unique ID) to each significant feature in the frame.

    Repeat for the next frame.
    Compare significant features. Using some sort of threshold, you can attach a confidence level that you're looking at the same feature as in the previous frame.

    That's a simplistic overview, but I did it many years ago for looking at the output of stellar formation simulations.
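
    A toy sketch of the association step described above, assuming each frame's significant features have already been reduced to arrays of (x, y) coordinates; the distance threshold standing in for the confidence level is arbitrary:

```python
import numpy as np

def associate(prev_pts, next_pts, max_dist=10.0):
    # Greedy nearest-neighbour association between two frames' feature
    # lists (arrays of (x, y) rows). Returns (prev_index, next_index)
    # pairs whose distance falls under the threshold.
    matches, taken = [], set()
    for i, p in enumerate(prev_pts):
        d = np.linalg.norm(next_pts - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_dist and j not in taken:
            matches.append((i, j))
            taken.add(j)
    return matches
```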
  • tracking motion (Score:3, Interesting)

    by Anonymous Coward on Tuesday February 05, 2002 @10:04AM (#2954994)
    i don't know if this would suit your needs, but a package called "motion" has been available for quite some time which in fact is oriented to tracking frame differences from a video source:

    http://motion.technolust.cx [technolust.cx]

    there are some examples and a sample video which demonstrate tracking "motion."

  • http://motion.technolust.cx/

    "Motion uses a video4linux device and detects changes in the image. If a change is detected a snapshot will be taken. "
  • If your goal is merely frame interpolation, I suspect that using a decent MPEG2 encoder and interpolating based on the motion compensation would be good enough.

    For other applications (e.g., colorization), you need somewhat better segmentation. Doing this well in the general case is still a research topic; but that's good: you can get lots of research software from around the net that does this sort of thing. Look for keywords like "computer vision", "motion", "segmentation", and "tracking" on Google.

  • why don't you try this far-fetched possibility:

    break up the image into N x N submatrices, and do a fourier transform on each subsection of the image. then do this for the next frame, calculate the phase differences between the frames, and use linear/cubic/etc interpolation to generate the frames in between. not too difficult, and I think there is even a 2-D FFT library located somewhere on download.com. this, however, might introduce a couple of artifacts, but if you're doing high framerate video, it shouldn't be too noticeable.

    or even more far-fetched:
    assuming that the translations of the objects in the image plane between frames are small and uniform enough, you might also be able to pull this off with a properly trained neural network on subsections of the image (so each individual feature fits approximately in each subsection). neural networks can do non-linear regression, but their outputs are continuous, so I figure if you train it right, it'll give you what you want.

    good luck :-)
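
    A minimal numpy sketch of the first idea for a single pair of co-located blocks: blend the FFT magnitudes and linearly interpolate the wrapped phase difference. As the poster warns, this is naive and will introduce artifacts:

```python
import numpy as np

def interpolate_block(block_a, block_b, t):
    # Generate an in-between block at time t in [0, 1] by blending the
    # FFT magnitudes and linearly interpolating the phase difference.
    Fa = np.fft.fft2(block_a)
    Fb = np.fft.fft2(block_b)
    mag = (1 - t) * np.abs(Fa) + t * np.abs(Fb)
    dphi = np.angle(Fb * np.conj(Fa))  # phase delta wrapped to [-pi, pi]
    phase = np.angle(Fa) + t * dphi
    return np.fft.ifft2(mag * np.exp(1j * phase)).real
```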
  • by dutky ( 20510 ) on Tuesday February 05, 2002 @01:10PM (#2956145) Homepage Journal
    Other folk have mentioned the MPEG motion compensation algorithm (though I think they got it a bit wrong). The algorithm chops the current frame into 8x8 pixel blocks on block-sized boundaries (first block at (0,0):(7,7), second block at (0,8):(7,15), and so on). These blocks are then compared against all possible 8x8 pixel blocks, local to the original block, in the adjacent frame (we compare against the blocks shifted by 1, 2, 3, 4, 5, 6, 7, and 8 pixels in both the x and y directions). Essentially, each block is compared against every possible sub-block of a 24x24 pixel region centered on the original block's position. The comparison succeeds if the difference between the two blocks is small enough (this is a threshold that you set).

    Once you have done this for every block in the original frame, you have a set of motion vectors from which you can construct an intermediate frame.
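
    A brute-force numpy sketch of the search described above, with sum-of-absolute-differences standing in for whatever block comparison an actual encoder uses, and a +/-8 pixel search range:

```python
import numpy as np

def block_motion_vectors(prev, curr, block=8, search=8):
    # Exhaustive block matching: for each block of the current frame, find
    # the offset within +/-search pixels in the previous frame with the
    # smallest sum of absolute differences (SAD).
    h, w = curr.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = curr[by:by + block, bx:bx + block].astype(np.int32)
            best, best_sad = (0, 0), None
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    cand = prev[y:y + block, x:x + block].astype(np.int32)
                    sad = int(np.abs(target - cand).sum())
                    if best_sad is None or sad < best_sad:
                        best, best_sad = (dx, dy), sad
            vectors[(bx, by)] = best  # motion vector for this block
    return vectors
```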

    • Check this:
      http://robotics.stanford.edu/~birch/klt/ [stanford.edu]

      I think that different MPEG compression schemes track motion differently - some using a brute force method. This method treats your image like a linear function so that it can search for the region of interest in the next image by using a "Newton's method"-like scheme - much more efficient than brute force pixel comparison. I could be wrong though - I wasn't really paying attention in class.
  • I picked up a book a while ago titled "Image Analysis and Processing", and it's part of the "Lecture Notes in Computer Science" series... it contains tons of information on image segmentation and has a few sections dealing specifically with object recognition and motion prediction. You could probably adapt many of the processes in there to suit your needs.

    I picked up this book (and many other computer and math books) at my local Coles bookstore for $2-$5 CDN each... I guess they were trying to get rid of them. I don't know if you'll be able to find a copy, but here's the info anyway:

    Lecture Notes in Computer Science
    Volume 1310
    Image Analysis and Processing
    Alberto Del Bimbo (Editor)
    Published by Springer

    ISSN: 0302-9743
    ISBN: 3-540-63507-6

    The editor's email address is listed on the cover page: delbimbo@aguirre.ing.unifi.it, so you might be able to contact him to see where you could find a copy... Good luck!
  • ... of the CGI industry. Many have tried and many have failed. I've known a couple of people who have been involved in developing systems to do this, and I've seen a lot of companies come and go who have promised such systems at industry shows (quite a few have claimed to be defence funded, which is a little scary). None that I know of have borrowed too heavily from public algorithms to do this (although one did claim their system generated some splendid comical morphing effects when mis-applied!). There are quite a few commercial systems out there (have a look at highend3d.com's lists etc.), but I guess this isn't what you're after. As I understand the state of the art, simply following a feature on an image frame (using fairly simple algorithms) is not a difficult problem in itself. The tricksy part comes when you need to follow a feature (e.g. edge/point) which is itself changing during the sequence, or, in the worst case, being eclipsed by another feature (a person walking in front of the camera). I know it doesn't help, and I hate to be negative, but I don't think you're gonna see a sourceforge motion-tracking project any time soon.
  • I found the audio commentary on the SG-1 DVD for Small Victories fascinating, with how they used lasers as points (they later brushed some out), so they could sync the CGI bugs with the moving camera.

    Also, the BBC have something camera-based in the works:
    http://www.bbc.co.uk/rd/tour/virtualproduction.html
  • I was considering using my webcam as a motion sensor. I do not know if it will work, but the idea is to have it sound an alert if the .jpg changes by more than so many bytes.
    That way, if the structure of the picture changes, with more or fewer pixels of the same color, the .jpg size will change, meaning something has happened.
    Will this work?
    • Yes, it can be done. I have a free-beer Windows program to do that (I don't remember the name, I almost never use it). I dunno if there's anything like it for Linux, but it's certainly possible.
    • checking the size of a jpg file would not work. maybe an uncompressed image would contain the information. A very small change in the original picture could lead to a very large change in the resulting bytes of the jpg file.

      but try searching for webcam motion detector [google.com] on google and you will find some useful stuff.
    • You wouldn't want to do something as crude as looking at the file size; instead, there are some fairly simple techniques for measuring image similarity. One of the simplest and most common methods for a computer to assess image quality is to calculate the Peak Signal-to-Noise Ratio (PSNR) between the current and previous frame. PSNR is based on a calculation of the Mean Squared Error (MSE) of the luminance differences between equivalent pixels in the old (f) and new (f') images.

      MSE = (Sigma[f(i,j) - f'(i,j)]^2) / N

      Where N is the number of pixels. PSNR in decibels (dB) is calculated as

      PSNR = 10 log10 (255^2 / MSE)

      A higher PSNR between two images indicates a greater degree of similarity; the PSNR of identical images is infinite. Although this calculation is a useful way of determining overall similarity between images, it does not necessarily correlate with human judgments particularly well. One crucial limitation of PSNR is that it is a global measure that treats all deviations the same. Therefore, a slight, barely perceptible, uniform degradation over the entire image may result in a PSNR that is identical to that of an image with an obvious, severe degradation in a small, prominent location of the picture.

      Alternatively, you could look into DCTune, a proprietary algorithm developed at NASA. It is based on some principles of the human visual perceptual system, and gives a score that is better correlated with human judgments.
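
      A direct numpy translation of those PSNR formulas for two equally sized 8-bit grayscale frames:

```python
import numpy as np

def psnr(frame_a, frame_b):
    # Peak signal-to-noise ratio between two equally sized 8-bit
    # grayscale frames; identical frames give infinity.
    diff = frame_a.astype(np.float64) - frame_b.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(255.0 ** 2 / mse)
```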
  • Hi.
    Our lab is doing very similar work. We've interpolated frames of video from an 8fps image sequence (taken with a wearable computer) into a smooth 30fps video sequence, using VideoOrbits. There's a short video example available somewhere on my homepage. Perhaps this would be of interest to you. VideoOrbits is freely available at http://wearcam.org/orbits [wearcam.org].

    VideoOrbits runs at over 11 fps on a 700 MHz dual processor machine. It's also a featureless tracking algorithm, so no point correspondences need to be identified.
