

Markerless Tracking Dataset Computer Vision

Scrapped from : http://www.metaio.com/research


Markerless Tracking Dataset

Overview
Unlike dense stereo, optical flow, or multi-view stereo, template-based tracking lacks benchmark datasets that allow a fair comparison between state-of-the-art algorithms. Until now, mainly synthetically generated image sequences have been used to evaluate the performance and robustness of template-based tracking algorithms objectively and quantitatively. The evaluation is therefore often intrinsically biased.

This website accompanies our ISMAR 2009 paper "A Dataset and Evaluation Methodology for Template-based Tracking Algorithms" (bib), in which we describe the process we carried out to acquire real scene image sequences with very precise and accurate ground truth poses, using an industrial camera rigidly mounted on the end-effector of a high-precision robotic measurement arm. For the acquisition, we considered most of the critical parameters that influence the tracking results, such as the texture richness and texture repeatability of the objects to be tracked, the camera motion and speed, the changes of the object scale in the images, and variations of the lighting conditions over time.
We designed an evaluation scheme for object detection and inter-frame tracking algorithms and used the image sequences to apply this scheme to several state-of-the-art algorithms. The image sequences are freely available for testing, submitting and evaluating new template-based tracking algorithms.

 

How to use it
Below you find the datasets we have generated so far. Each dataset consists of a movie, an image of the tracking target, the intrinsics of the camera used, and a file giving ground truth poses for every 250th frame; all movies consist of 1200 frames each. There are five movies per target, focusing on "Angle", "Range", "Fast Far", "Fast Close" and "Illumination". The movies are encoded with the lossless FFV1 codec from the ffmpeg project (ffmpeg.org); a DirectShow codec is available at http://ffdshow-tryout.sourceforge.net/. You can use e.g. VirtualDub (http://www.virtualdub.org/) to convert the sequences into still images if you need to.

The task now is to detect the target image in the frames of the movie. All reference targets are 640x480 images. For every 250th frame, we provide the coordinates of four corners that are placed at the pixels (±512, ±384); the origin of the tracking target is in its middle (see the image on the right, where the white frame represents the 640x480 px target and the reference points given for initialization lie on the diagonal). All images have their origin in the upper left corner.

We offer to evaluate the results you obtain with your tracking algorithm and to send you the outcome. If you agree, we can additionally publish your results on this webpage. To evaluate your results against the ground truth we have for every frame, please send an email to research(at)metaio.com with a tab-separated log file of your experiments attached (one per sequence), formatted like this example.

We evaluate your log files and then send you the results (example results for SIFT are shown below on the right). As the error measure we use the RMS error of the four reference points. A frame is considered successfully tracked if the RMS error is below 10 px.
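For a rough illustration, the per-frame check could be computed as in the following MATLAB sketch (the variable names, example values and the 4x2 corner layout are assumptions for illustration, not the official evaluation code):

gtCorners      = [512 384; -512 384; -512 -384; 512 -384];  % hypothetical ground-truth corners [x y]
trackedCorners = gtCorners + 2*randn(4, 2);                 % hypothetical tracker output for one frame
diffs     = trackedCorners - gtCorners;                     % per-corner displacement
rmsError  = sqrt(mean(sum(diffs.^2, 2)));                   % RMS over the four corners
isTracked = rmsError < 10;                                  % success threshold of 10 px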

For the evaluation results of SIFT, SURF, FERNS and ESM please refer to our paper.

Support:
This work was partially supported by BMBF grant Avilus / 01 IM08001 P.

 

Contact info
For comments and suggestions, feel free to contact research(at)metaio.com




Fingerprint Singular Point Detection Competition Computer Vision

Scrapped from http://paginas.fe.up.pt/~spd2010/



Databases


Training data set

  • Initial training data set (210 images) - download

  • Annotation file for initial training data set - download
    - added on 16th Nov.
    The manual labelling process was based on E. R. Henry, "Classification and Uses of Fingerprints", London: Routledge, 1900 (pdf).

Stick 2 the point


Error using ==> sprintf. Function is not defined for sparse inputs. Programming

scrapped from :  http://www.mathworks.com/support/solutions/en/data/1-20J9XR/index.html?product=ML&solution=1-20J9XR

Problem Description:

When I execute the following commands:
s=sparse(eye(5,5));
dlmwrite('mydata.dat',s);
I receive the following result:
??? Error using ==> sprintf
Function is not defined for sparse inputs.
Error in ==> dlmwrite at 172
str = sprintf(format,m(i,:));
However, the above commands work without errors in MATLAB 6.5.1 (R13SP1) and a file 'mydata.dat' is created.

Solution:

This is not a bug. The error message you are receiving is issued by the SPRINTF function, which does not accept sparse matrices as input. To work around this issue, convert the sparse matrix to a full matrix before using DLMWRITE. Here are the steps:
s=sparse(eye(5,5));
dlmwrite('mydata.dat',full(s));



Wolfram Tones Automatic Music Composition

Scrapped from : http://tones.wolfram.com/



WolframTones--an experiment in a new kind of music--made possible by Mathematica and A New Kind of Science
(all compositions generated by WolframTones algorithms)

a creation of Wolfram Research Labs
 


Remembering 015B

Every time I happen to watch one of those shows looking back at the 1990s, I'm amazed: how can anyone talk about the 90s without 015B?

If you dismiss this as just my personal take, there's nothing I can say, but I think the best musician of the 90s was 015B.

They say the songs you loved as a teenager stay with you for life; I can't say how lucky I was to have 015B's songs in my youth.

Every time a new album of theirs came out, I quietly got excited... and most of their music satisfied me.

At one point my dream was to become a composer, and that was probably largely due to the influence of 015B (Jung Seok-won).

They have since disbanded and been largely forgotten, but if I met up with friends and went to karaoke, the songs I'd sing the most would probably still be theirs.



Just As It Was... Misc.


How wonderful it would be if I could go back to those days, just as they were.

It truly was a time I wouldn't trade for anything...

Sohn Kee-chung.... because it was so beautiful... my heart is torn apart like this every day....

error LNK2001: unresolved external symbol "unsigned int Programming


scrapped from : http://sundararajana.blogspot.com/2008/07/dshow-atl-error.html


DSHOW ATL Error

When I try to compile and run the PlayWnd Example in 2005,
the following link problem appears.

error LNK2001: unresolved external symbol "unsigned int (__stdcall* ATL::g_pfnGetThreadACP)(void)" (?g_pfnGetThreadACP@ATL@@3P6GIXZA)

Solution:
------------
I found that g_pfnGetThreadACP is defined in USES_CONVERSION, which is used in W2T and so on.
Include atlsd.lib (it contains the USES_CONVERSION definition).

still far from satisfactory Useful Expressions

Both computational costs and accuracy are still far from satisfactory

In terms of both execution time and accuracy, the state of the art is still unsatisfactory.

Mario AI Competition

Honestly, if I had the spare time, it would be fun to spend my days doing things like this.....

Scrapped from : http://julian.togelius.com/mariocompetition2009/index.php



Mario AI Competition

Sergey Karakovskiy and Julian Togelius

In association with the IEEE Consumer Electronics Society Games Innovation Conference 2009 and with the IEEE Symposium on Computational Intelligence and Games

Deadlines: August 18 (ICE-GIC) and September 3 (CIG)

Overview
Getting started
Advanced options
Rules
Submitting your controller
ICE-GIC league table
CIG league table

Last update: August 12, 2009

Overview

This competition is about learning, or otherwise developing, the best controller (agent) for a version of Super Mario Bros.

The controller's job is to win as many levels (of increasing difficulty) as possible. Each time step (24 per second in simulated time) the controller has to decide what action to take (left, right, jump, etc.) in response to the environment around Mario.

We are basing the competition on a heavily modified version of the Infinite Mario Bros game by Markus Persson. That game is an all-Java tribute to Nintendo's seminal Super Mario Bros game, with the added benefit of endless random level generation. We believe that playing this game well is a challenge worthy of the best players, the best programmers and the best learning algorithms alike.

One of the main purposes of this competition is to be able to compare different controller development methodologies against each other, both those based on learning techniques such as artificial evolution and those that are completely hand-coded. So we hope to get submissions based on evolutionary neural networks, genetic programming, fuzzy logic, temporal difference learning, human ingenuity, hybrids of the above, etc. The more the merrier! (And better for science.)

There are cash prizes associated with each phase of the competition: USD 500 for the winner of the CIG phase, and USD 200, 100 and 50 respectively for the winners of the ICE-GIC phase. At least one member of the winning team needs to be registered and present at the relevant conference to receive the prize money; however, it is possible to win the competition and receive the certificate for it without attending the conference.

We welcome feedback on the web page, the organization and the software.

Videoweb Activities Dataset Computer Vision

Scrapped from : http://vwdata.ee.ucr.edu

It's surveillance data, but apparently you have to send in a signed form before you can download it....

Ugh, so stingy.... just hand it over.... it's not like it's anything that special...



Videoweb Activities Dataset


The Videoweb Activities Dataset has about 2.5 hours of video data consisting of dozens of activities along with annotation. The data is now available publicly for research. Please download and submit the following release form to gain access to the dataset. Below are some samples from the data.

Release Form

Login to Videoweb Activities Dataset


Sample Videos:

Sample Scene 1
4 view video of sample courtyard scene.

Sample Scene 2
8 view video of sample courtyard scene.

Sample Scene 3
8 view video of sample courtyard scene.

Sample Scene 4
7 view video of sample intersection scene.





multisensor datasets from project Rawseeds Computer Vision

scrapped from : http://www.rawseeds.org/home/


Home

The aim of the Rawseeds Project is to build benchmarking tools for robotic systems. This is done through the publication of a comprehensive, high-quality Benchmarking Toolkit composed of:

  • high-quality multisensor datasets, with associated ground truth;
  • Benchmark Problems based on the datasets;
  • Benchmark Solutions for the problems.

The BPs include quantitative performance metrics that can be applied to the output of the BSs. Therefore, once they are put in the form of BSs, algorithms can be assessed and compared. Interested? Click here to read more about the Toolkit.

Rawseeds' Benchmarking Toolkit is mainly targeted at the problems of localization, mapping and SLAM in robotics, but its use is not limited to them. It will be freely downloadable from this website, and its elements are currently under development.

*** UPDATE: the datasets are online! ***

You can contribute to Rawseeds! Actually, this is a great way to give your own algorithms and results the visibility they deserve. It is done by putting them in the form of Benchmark Solutions (don't worry, it's not a lot of work!) and publishing them along with ours. Want to know more? Take a look here.

Questions? You can read the F.A.Q. to learn more. If you want, you can also contact us. Finally, please take a look at our forum. It's the place where Rawseeds users and contributors share their thoughts, discuss, or… have new ideas :-)

Rawseeds is funded by the European Commission within the Sixth Framework Programme, and its activities are currently under way. Read here to know how the project is going.


Daimler Pedestrian Detection Benchmark Dataset Computer Vision


Scrapped from : http://www.science.uva.nl/research/isla/downloads/pedestrians/index.html



2009 DB



Download


File                     Size
Announcement.txt         3 Kb
Documentation.tar.gz     49 Kb
LICENSE.txt              1 Kb
Calibration.tar.gz       726 b
GroundTruth.tar.gz       1.2 Mb
TestData_part1.tar.gz    920 Mb
TestData_part2.tar.gz    968 Mb
TestData_part3.tar.gz    930 Mb
TestData_part4.tar.gz    919 Mb
TestData_part5.tar.gz    1.2 Gb
TrainingData.tar.gz      1.2 Gb


The original authors would appreciate hearing about other publications that make use of the benchmark data set, in order to include corresponding references on this website; see contact.




2006 DB



Download


File                          Size     Description
README_benchmark.txt          7 Kb     Readme file with details of data set
DC-ped-dataset_base.tar.gz    30 Mb    Base pedestrian classification data set
DC-ped-dataset_add-1.tar.gz   135 Mb   Additional non-pedestrian images part 1
DC-ped-dataset_add-2.tar.gz   86 Mb    Additional non-pedestrian images part 2
DC-ped-dataset_add-3.tar.gz   158 Mb   Additional non-pedestrian images part 3


The original authors would appreciate hearing about other publications that make use of the benchmark data set, in order to include corresponding references on this website; see contact.


Natural Image Statistics Computer Vision



Scrapped from : http://www.naturalimagestatistics.net/

Natural Image Statistics

— A probabilistic approach to early computational vision

Aapo Hyvärinen, Jarmo Hurri, and Patrik O. Hoyer

Book published by Springer-Verlag, 2009.

Publisher's book home page


Order from:

Springer  Amazon.co.uk Amazon.com

Downloads:

Full preprint version in pdf, 487 pages, 9 MB (Feb 2009 version)

Matlab code and image data for reproducing most experiments, 1.5 MB


From the preface:

This book is both an introductory textbook and a research monograph on modelling the statistical structure of natural images. In very simple terms, "natural images" are photographs of the typical environment where we live. In this book, their statistical structure is described using a number of statistical models whose parameters are estimated from image samples.

Our main motivation for exploring natural image statistics is computational modelling of biological visual systems. A theoretical framework which is gaining more and more support considers the properties of the visual system to be reflections of the statistical structure of natural images, because of evolutionary adaptation processes. Another motivation for natural image statistics research is in computer science and engineering, where it helps in the development of better image processing and computer vision methods.

The book is targeted at advanced undergraduate students, graduate students and researchers in vision science, computational neuroscience, computer vision and image processing. It can also be read as an introduction to the area by people with a background in mathematical disciplines (mathematics, statistics, theoretical physics). Due to the multidisciplinary nature of the subject, the book has been written so as to be accessible to an audience coming from very different backgrounds such as psychology, computer science, electrical engineering, neurobiology, mathematics, statistics and physics.



Jackie Zhu Computer Vision


Triple Misc.

This TV show is very addictive. In fact, every Wednesday and Thursday night I find myself sitting in front of the TV. Actually, I don't usually like this kind of setting, that is, characters working in a professional field; I prefer stories about ordinary people and everyday life. Somehow this show proved my prejudice wrong.
I like the way it is carried along by the narration of Haroo (the girl figure skater). However, the character of Hyuntai (the guy with a one-sided love for Sooin, Haroo's skating coach and Whal's wife) is hard to understand. He is far too cool about it. Actually, Whal is too cool as well; his behavior is the most unrealistic part of the show. Nobody could do that (not giving up a woman who turns out to be his best friend's wife) unless he were crazy. Why doesn't he just leave her, even saying "I am leaving you because my love for you is so huge that I cannot bear to see you in pain because of it"? I hope the writer doesn't try to make it any more tense than it already is.
Oh, I almost forgot to say that Sooin is so beautiful.
I'm embarrassed enough to admit I keep saying, "How am I supposed to wait until next week?"




references therein


See web notes and references therein

Please see the web notes and the references therein.

your name will go down in history Useful Expressions


if you can find an optimal solution in polynomial time, your name will go down in history forever

If you find an optimal solution in polynomial time, your name will go down in history.

MS Bing

When I saw the search results, it reminded me of the movie "The Matrix".

I felt as if I had been bred inside a huge matrix named Google.

Another metaphor: Truman Burbank in the movie "The Truman Show".

Anyway, I realized that the results can differ greatly depending on the search engine algorithm.

This is why the choice of search engine matters: a search engine with malicious intent could block certain information from me (maybe for the rest of my life).

Right now I cannot tell which one is better, but Bing was certainly a new experience.

거북이 달린다 (Running Turtle)






It was fun.

Most movies are entertaining in the first half and then lose steam in the second half as the flaws show; this one was the opposite, the second half was more fun.
 
The first half does feel a bit slow by comparison.... It's about a two-hour movie; you may glance at your watch around the 50-minute mark, but from then until the end you won't look at it again.

Overall it has a bit of a <추격자> (The Chaser) feel. The comic relief from the actors playing Kim Yoon-seok's friends could easily have come off as unrealistic and forced, but the detail in the rest of the film keeps that in check.

I've admired Kim Yoon-seok's acting ever since <천하장사 마돈나> (Like a Virgin), so it goes without saying he is great here, and Kyeon Mi-ri's acting was good too.

sino Useful Expressions

Sino-: a prefix meaning "(of or with) China".

Caltech Pedestrian Dataset Computer Vision

Whoa, it's over 5 GB... but it's a good dataset....



scrapped from : http://www.vision.caltech.edu/Image_Datasets/CaltechPedestrians/




Caltech Pedestrian Dataset


Description

The Caltech Pedestrian Dataset consists of approximately 10 hours of 640x480 30Hz video taken from a vehicle driving through regular traffic in an urban environment. About 250,000 frames (in 137 approximately minute-long segments) with a total of 350,000 bounding boxes and 2300 unique pedestrians were annotated. The annotation includes temporal correspondence between bounding boxes and detailed occlusion labels. More information can be found in our CVPR09 paper.

Download

  • The training data for the Caltech Pedestrian Dataset is available here. There are six training sets (~1GB each), each consisting of 6-13 minute-long seq files; see the paper for more details. Detection results for all evaluated algorithms are also provided.
  • We are not releasing the testing data; please see "submitting results" below for information on how to include your trained pedestrian detector in the evaluation.
  • All videos are encoded using the seq file format. A seq file is a series of concatenated image frames with a fixed-size header. Matlab routines for reading/writing/manipulating seq files can be found in Piotr's Matlab Toolbox (version 2.30 or later).
  • Associated Matlab code is available here. The annotations use a custom "video bounding box" (vbb) file format. The code also contains utilities to view seq files with the annotations overlayed, evaluation routines used to generate all the ROC plots in the paper, and also the vbb labeling tool used to create the dataset (a slightly outdated video tutorial of the labeler is also available).
  • To allow for the exact reproduction of the INRIA ROC plots for full images, for convenience we are also posting the INRIA pedestrian full images/annotations in seq/vbb format as well as detection results for all evaluated algorithms.

Benchmark Results

Algorithm details and references can be found here. Note: some of the results below vary slightly from those in the CVPR09 paper due to simplified handling of ignore regions.
  1. Caltech Pedestrian Testing Dataset: All results in our CVPR09 paper were reported on this data (the data is not available for download, see submitting results for details). Results on 50-pixel or taller, unoccluded or partially occluded pedestrians are shown here; a more detailed breakdown of performance, as in the paper, can be found here.
  2. Caltech Pedestrian Training Dataset: Results on the training data (which is available for download). These results are provided so researchers can compare their method without submitting a classifier for full evaluation. Results on 50-pixel or taller, unoccluded or partially occluded pedestrians are shown here; a more detailed breakdown of performance can be found here.
  3. Caltech Pedestrian Japan Dataset: Similar to the Caltech Pedestrian Dataset (both in magnitude and annotation), except the video was collected in Japan. We cannot release this data; however, we will benchmark results to give a secondary evaluation of various detectors. Results on 50-pixel or taller, unoccluded or partially occluded pedestrians are shown here; a more detailed breakdown of performance can be found here.
  4. INRIA Pedestrian Test Dataset: Results on the INRIA pedestrian full image data, obtained using the 288 positive test images (details given here). The ROC on the full image results is available here.
Last updated May 31, 2009.

Submitting Results

We are not releasing the test data at this time. Instead we ask authors to submit final, trained classifiers which we shall proceed to evaluate. We understand that this means additional effort for everyone involved; however, our aim is to help prevent overfitting and to extend the dataset's lifespan. Furthermore, it ensures that all algorithms are evaluated in precisely the same manner.

Input/Output format: we are flexible, since whatever IO format is used, we simply write a wrapper function that allows for running the algorithm in a distributed manner. The only requirement is that the algorithm take in an image and return a bounding box and a score for each detection (as a matrix, text file, etc). The algorithm should perform multi-scale detection, detecting pedestrians at least 100 pixels tall (the returned detected bounding boxes can have additional padding) and performing any necessary non-maximal suppression (nms). If need be, nms and fast resampling code can be found in Piotr's Matlab Toolbox. Linux 32 or 64 bit binaries or Matlab code are ideal; Windows 32 bit binaries are acceptable if need be. Finally, the algorithm should take a total of at most about 1 minute per 640x480 image (on a reasonable single core machine), with faster times being highly preferred, and must be able to handle images as large as 1280x960. For algorithms that utilize motion information, the input to the algorithm can be a pair or triplet of images. For more sophisticated methods that require use of the entire video, we ask researchers to write routines that directly utilize the seq files as input (using the provided seq support code). Please contact us if the above IO format is too restrictive for your needs.
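For illustration only, a wrapper in the spirit of this IO contract might look like the MATLAB sketch below (the function names, file naming and the [x y width height score] layout are assumptions, not the organizers' actual harness):

function bbs = runDetector(imgFile)
% Hypothetical wrapper: read one frame and return an N x 5 matrix of
% detections, one row per pedestrian: [x y width height score].
I = imread(imgFile);
bbs = myPedestrianDetector(I);      % placeholder for your own multi-scale detector
% Non-maximal suppression and any padding of the boxes would already be applied here.
dlmwrite([imgFile '.txt'], bbs, '\t');   % write tab-separated results
end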

Related Datasets

Below we list other pedestrian datasets, roughly in order of relevance and similarity to the Caltech Pedestrian dataset. A more detailed comparison of the datasets (except the first two) can be found in the paper.
  • Daimler: Also captured in an urban setting, an update of the older DaimlerChrysler dataset. Contains tracking information and a large number of labeled bounding boxes.
  • NICTA: A large scale urban dataset collected in multiple cities/countries. No motion/tracking information, but a significant number of unique pedestrians.
  • ETH: Urban dataset captured from a stereo rig mounted on a stroller.
  • INRIA: Currently one of the most popular static pedestrian detection datasets.
  • PASCAL: Static object dataset with diverse object views and poses.
  • USC: A number of fairly small pedestrian datasets taken largely from surveillance video.
  • CVC: A fairly small scale urban pedestrian dataset.
  • MIT: One of the first pedestrian datasets, fairly small and relatively well solved at this point.


programming language Programming

Programming languages really are languages.

Well... is that obvious?

If you don't use them you forget them, and if you focus really hard you improve a little.... It's funny how much that feels exactly like studying a language abroad.

With a natural language you improve quickly only when you have someone to talk to; when learning a programming language, who is your conversation partner?

RJMCMC Tutorial Machine Learning

Scrapped from : http://cvlab.epfl.ch/~ksmith/tutorial/rjmcmc.php

Flash animations, even? What a diligent guy....

RJMCMC Tutorial

     

What is this Tutorial About?




The goal of this tutorial is to provide an understanding of Reversible-Jump Markov Chain Monte Carlo (RJMCMC, also known as trans-dimensional MCMC) applied to multi-object video tracking. RJMCMC is a method of approximate inference for a Dynamic Bayesian Network (DBN), which is a probabilistic method for modeling dependencies, applied here to model the problem of tracking. I have tried to present the RJMCMC approach in clear and simple terms in this tutorial, with the aid of some graphics and animations to illustrate some of the more difficult concepts. I have also tried to give a general algorithmic description of the RJMCMC approach, so that the reader is free to implement it in the style and programming language of his or her choice.
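To make the sampling idea concrete, here is a minimal fixed-dimension Metropolis-Hastings sketch in MATLAB (an illustration only; the target density and all names are assumptions, and full RJMCMC additionally uses trans-dimensional birth/death moves so that the number of tracked objects can change):

logTarget = @(x) -0.5 * x.^2;       % assumed log-posterior (standard normal)
nSamples  = 5000;
sigma     = 1.0;                    % random-walk proposal standard deviation
x         = 0;                      % initial state
samples   = zeros(nSamples, 1);
for i = 1:nSamples
    xProp = x + sigma * randn;      % symmetric random-walk proposal
    logAlpha = logTarget(xProp) - logTarget(x);
    if log(rand) < logAlpha         % accept with probability min(1, alpha)
        x = xProp;
    end
    samples(i) = x;                 % record current state (accepted or not)
end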


© 2009 Kevin Smith. All rights reserved.



NEC Animal Dataset Machine Learning

Scrapped from : http://ml.nec-labs.com/download/data/videoembed/


NEC Animal Dataset

Description

The NEC Animal dataset consists of a sequence of about 5000 images from 60 toy animals taken at different poses. The pose changes continuously by putting each toy on a turntable. About 72 images are available per object. This dataset can be useful for performance evaluation of pose-invariant object recognition or object categorization methods.

Download

Downloading and using this dataset is free for research purposes. The original sequence (316 MB) contains frames where an operator swaps successive animals on the turntable. The clean sequence (281 MB) is the same but with swapping frames eliminated. Recognition results using this dataset are reported in the following paper:

Hossein Mobahi, Ronan Collobert, Jason Weston. Deep Learning from Temporal Coherence in Video, International Conference on Machine Learning (ICML'09), Montreal, Canada, June 2009.



New Extensive Pose Estimation Dataset

So you can get into ICRA with something like this, too...

Scrapped from : http://www.cvl.isy.liu.se/research/objrec/posedb/


Object Pose Estimation Database

This database contains 16 objects, each sampled at 5° angle increments along two rotational axes. All objects are available both with a black background and with a cluttered background. Some of the objects are available in different lighting conditions (left, right, ambient). The montage below shows one view of each object.

See image below for an illustration of the available views.

Take me to the datasets.

Associated publication

If you use this database in a publication, you should reference the paper:
F. Viksten, P.-E. Forssén, B. Johansson, and A. Moe. Comparison of Local Image Descriptors for Full 6 Degree-of-Freedom Pose Estimation. IEEE International Conference on Robotics and Automation, May 2009. [BibTeX].


If you use this dataset and want to have your publication listed here, please drop us a note at .

SVEN: Surveillance Video Entertainment Network Computer Vision

Scrapped from http://deprogramming.us/sven/index.html


SVEN: Surveillance Video Entertainment Network
aka "AI to the People"

by Amy Alexander, Wojciech Kosma, Vincent Rabaud
with Nikhil Rasiwasia and Jesse Gilbert
Production Assistants: Marilia Maschion, Annina Rüst, Cristyn Magnus

The project that asks the question: If computer vision technology can be used to detect when you look like a terrorist, criminal, or other "undesirable" - why not when you look like a rock star?

 

 
SVEN in original van performance configuration

SVEN in three-monitor installation configuration

 

SVEN (Surveillance Video Entertainment Network) is a system comprised of a camera, monitor, and two computers that can be set up in public places, especially in situations where a CCTV monitor might be expected. The software consists of a custom computer vision application that tracks pedestrians and detects their characteristics, and a real-time video processing application that receives this information and uses it to generate music-video-like visuals from the live camera feed. The resulting video and audio are displayed on a monitor in the public space, interrupting the standard security-camera-type display each time a potential rock star is detected. The idea is to humorously examine and demystify concerns about surveillance and computer systems not in terms of being watched, but in terms of how the watching is being done, and how else it might be done if other people were at the wheel.


There's also the other side of the SVEN coin: when do rock stars look like you? We noticed that music video cinematography and editing often resemble surveillance footage. So in the spirit of reality TV, we programmed SVEN's cinematography algorithms to make surveillance music videos live...



OpenVIDIA Computer Vision

Scrapped from : http://openvidia.sourceforge.net/index.php/OpenVIDIA

OpenVIDIA: Parallel GPU Computer Vision

What is OpenVIDIA?

OpenVIDIA projects implement computer vision algorithms on computer graphics hardware, using OpenGL, Cg and CUDA. The project provides useful example programs which run real-time computer vision algorithms on single or parallel graphics processing units (GPUs).

Currently OpenVIDIA consists of CVWB (Computer Vision Workbench), a Windows application that runs common image processing routines. Additionally, there are a few "cores" containing the algorithms below.

OpenVIDIA projects utilize the computational power of the GPU to provide real-time computer vision and imaging much faster than the CPU is capable of, while offloading the CPU to allow it to conduct concurrent tasks.

This project was founded at the Eyetap Personal Imaging Lab (ePi Lab) at the Electrical and Computer Engineering Group at the University of Toronto. It has been expanded to include contributions from many sources in academia and industry.



VLFeat Computer Vision

Scrapped from : http://www.vlfeat.org/index.html



The VLFeat open source library implements popular computer vision algorithms including SIFT, MSER, k-means, hierarchical k-means, agglomerative information bottleneck, and quick shift. It is written in C for efficiency and compatibility, with interfaces in MATLAB for ease of use, and detailed documentation throughout. It supports Windows, Mac OS X, and Linux. The latest version of VLFeat is 0.9.4.
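As a quick usage illustration, a minimal MATLAB sketch might look like the following (the image file name is an assumption, and VLFEATROOT stands for wherever the library was unpacked):

% Add VLFeat's MATLAB interface to the path (adjust VLFEATROOT as needed).
run('VLFEATROOT/toolbox/vl_setup');
% vl_sift expects a single-precision grayscale image.
I = single(rgb2gray(imread('image.jpg')));
[frames, descriptors] = vl_sift(I);   % 4xN keypoint frames, 128xN descriptors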

Help

Reference

Tutorials

  • SIFT – Scale Invariant Feature Transform
  • MSER – Maximally Stable Extremal Regions
  • IKM – Integer optimized k-means
  • HIKM – Hierarchical k-means
  • AIB – Agglomerative Information Bottleneck
  • Quick shift – Quick shift mode seeking
  • More ...

BibTeX entry

@misc{vedaldi08vlfeat,
  Author = {A. Vedaldi and B. Fulkerson},
  Title = {{VLFeat}: An Open and Portable Library of Computer Vision Algorithms},
  Year = {2008},
  Howpublished = {\url{http://www.vlfeat.org/}}
}

Acknowledgments

Part of this work was supported by the UCLA Vision Lab and the Oxford VGG Lab. The authors would like to thank the many colleagues that have contributed to VLFeat by testing and providing helpful suggestions and comments.



Multimedia Grand Challenge 2009 Computer Vision

Scrapped from : http://www.scils.rutgers.edu/conferences/mmchallenge



Yahoo! Challenge:

Radvision Challenge:

CeWe Challenge:

Google Challenge:

HP Challenge:

Nokia Challenge:


Accenture Challenge:

CurrentTV Challenge:


