
Biometrics

The use of face, fingerprint, palm, iris, and retinal images to identify a person is growing. Use cases range across airport security, access control, law enforcement, single sign-on, and more recently banking and online commerce. High-quality cameras in mobile phones make biometrics very accessible. Biometric systems are made up of functional blocks including the camera, illumination, algorithms, and user experience, among others. These systems must be easy to use while providing high discrimination under a wide range of conditions.
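
To make "high discrimination" concrete, the short Python sketch below computes the false-reject and false-accept rates of a matcher at a given decision threshold; the score values and thresholds are made-up illustrations, not data from any system described on this page.

import numpy as np

# Hypothetical match scores; higher means "more likely the same person".
genuine_scores = np.array([0.92, 0.88, 0.95, 0.81, 0.90])   # same-person comparisons
impostor_scores = np.array([0.30, 0.55, 0.42, 0.61, 0.25])  # different-person comparisons

def error_rates(threshold, genuine, impostor):
    """Return (false-reject rate, false-accept rate) at a decision threshold."""
    frr = float(np.mean(genuine < threshold))    # genuine pairs wrongly rejected
    far = float(np.mean(impostor >= threshold))  # impostor pairs wrongly accepted
    return frr, far

for t in (0.5, 0.7, 0.9):
    frr, far = error_rates(t, genuine_scores, impostor_scores)
    print(f"threshold={t:.1f}  FRR={frr:.2f}  FAR={far:.2f}")

Sweeping the threshold traces the trade-off curve (ROC/DET) that biometric systems are tuned against.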

Patent:
Iris image capture devices and associated systems
USPTO Patent number: 7652685


Book chapter:
Wavefront Coding for Enhancing the Depth of Field in Less-Constrained Iris Acquisition,
Encyclopedia of Biometric Recognition, Springer-Pegasus, 2008.
P. E. X. Silveira, L. Gao, and R. Narayanswamy


Publications:
Extending the imaging volume for biometric iris recognition,
Applied Optics, February 2005;
R. Narayanswamy, G. E. Johnson, P. E. X. Silveira, and H. B. Wach

Iris Recognition at a Distance with Expanded Imaging Volume
Proceedings of SPIE on Defense and Security, Orlando, April 2006
R. Narayanswamy and P. E. X. Silveira

Extended depth of field iris recognition system for a workstation environment
Proceedings of SPIE on Defense and Security, Orlando, April 2005
R. Narayanswamy, P. E. X. Silveira, H. Setty, V. P. Pauca, and J. van der Gracht

Iris Recognition with Enhanced Depth of Field Image Acquisition System
Proceedings of SPIE Conference on Defense and Security, Orlando, April 2004.
J. van der Gracht, V. P. Pauca, H. Setty, R. Narayanswamy, R. Plemmons, S. Prasad, and T. Torgersen.

Task-optimized Vision Systems

Application-specific integrated circuits (ASICs) exploit a priori knowledge of an application to deliver higher performance within the same size, weight, and power footprint. Similarly, an imaging system customized for a specific use case can deliver superior performance. Select examples in this class are biometric systems, manufacturing conveyor-belt inspection, medical imaging, and box sizing and sorting for mail-delivery offices.

Patent:
Task-based imaging system
USPTO Patent number: 8760516, 8144208, 7944467


Publications and Presentations: 

Signal-to-Noise Analysis of Task-based Imaging Systems with Defocus,
Applied Optics, May 2006; Feature issue on Task-Specific Sensing and Computational Imaging.
Republished by Optical Society of America on The Virtual Journal of Biometric Optics, Volume 1, Issue 6 – June 13, 2006
P. E. X. Silveira and R. Narayanswamy

Integrated system design for application-specific imaging,
Invited presentation at the Optical Society of America’s Annual Meeting, October 2005
E. Dowski, P. E. X. Silveira and R. Narayanswamy

Task-based imaging system optimized for iris recognition
Invited talk presented at the Biometric Consortium Conference, Baltimore, MD, September 2006
R. Narayanswamy and P. E. X. Silveira

Design and optimization of the cubic phase pupil for the extension of the depth of field of task-based imaging systems,
Proc. SPIE, vol. 6311, SPIE Symposium on Optics & Photonics, 13-17 August 2006
S. Bagheri, P. E. X. Silveira, R. Narayanswamy, and D. P. de Farias

Computer Vision / Pattern Recognition

The excitement generated by work in autonomous driving and the remarkable progress in deep learning are making computer vision more accessible every day. Optical character recognition (OCR) is enabling the digital transformation of industries such as banking and insurance. Sports analytics, like TrueView from Intel, delivers novel camera views and real-time on-field performance statistics. Satellite and airborne imagery is increasingly used across industries including precision agriculture, insurance, smart-city planning, and disaster recovery. Medical diagnosis uses image-based inspection in X-ray, mammography, cytology, and pathology. Biometrics, robotics, and drones all use computer vision.

Patent:
Method and apparatus for robust shape detection using a hit/miss transform
USPTO Patent number 5790691


Publications and Presentations:

Optoelectronic Region of Interest Detection: An application in automated cytology,
Applied Optics, September 1998
R. Narayanswamy and K. M. Johnson

Optoelectronic Hit/Miss transform for screening cervical smear slides,
Optics Letters, June 1995.
R. Narayanswamy, J. P. Sharpe and K. M. Johnson

Smart Pixel Arrays for Intelligent Image Sensing (Invited presentation),
Optical Society of America Annual Meeting, Long Beach, October 1997;
K.M. Johnson and R. Narayanswamy

Liquid crystal-based optical processors for region of interest detection (Invited),
Photonics West, San Jose, February 1997;
K. M. Johnson, R. Narayanswamy, and J. L. Metz

Intelligent Data Elimination for a rare event application
SPIE 1998 Annual Meeting, San Diego, July 1998;
R. Narayanswamy, J. L. Metz and K. M. Johnson

Analysis of morphological structuring elements generated using adaptive resonance theory;
Nonlinear Image Processing VI, San Jose, Feb 1995, SPIE Proceedings Volume 2424.
J. P. Sharpe, N. Sungar, R. Narayanswamy, and K. M. Johnson

AR/VR / Immersive Experiences

Augmented Reality (AR) and Virtual Reality (VR) are enabling new forms of immersive experiences. AR/VR systems combine sensing and camera subsystems that detect the user's environment, graphics engines that render visual content, positional tracking that gives the user spatial awareness, efficient battery and power management, and system design for comfortable wear, all while delivering the experience without triggering psychophysical issues such as motion sickness.

Presentations

Multi-camera systems for AR/VR and depth-sensing (Invited),
Electronic Imaging 2018, Burlingame, CA, January 2018
Ram Narayanswamy and Evan Fletcher

Movements in Digital Imaging (Invited)
Presentation at the University of Colorado’s ATLAS Institute, November 02, 2015
Ram Narayanswamy

Changing the Paradigm of Imaging through Immersive Media Experiences (Invited)
Optical Society of America’s Imaging and Applied Optics Congress, Washington, D.C., June 2015
Ram Narayanswamy

Computational Imaging Platforms and Next-Generation User Experiences (Invited)
The Stanford Center for Image System Engineering, Palo Alto, CA – January 2014
Ram Narayanswamy

Next-Generation Visual Media and User Experiences (Invited)
University of Colorado’s Computational Optical Sensing and Imaging – NSF center of excellence, Boulder, CO; November 2013
Ram Narayanswamy

Enabling New Visual User Experiences (Invited)
Optical Society of America’s Imaging and Applied Optics Congress, Washington, D.C., July 2013
Ram Narayanswamy

3D Imaging / Depth Sensing

It is increasingly important to characterize a space in all three dimensions: height, width, and depth. Depth sensing is being delivered in a number of ways. Dual-camera or multi-camera systems estimate depth from parallax, as humans do. Active depth-ranging sensors measure time of flight. LIDAR and radar are also being brought to market to meet the demands of autonomous navigation for cars, drones, and robots.
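
As a minimal sketch of the parallax approach mentioned above (Python with OpenCV and NumPy; the focal length, baseline, and image file names are assumed for illustration and are not taken from the patents listed below), classical block matching yields a disparity map that converts to metric depth through the pinhole relation Z = f * B / d.

import cv2
import numpy as np

# Assumed, illustrative camera parameters for a rectified stereo pair.
FOCAL_PX = 800.0    # focal length in pixels
BASELINE_M = 0.06   # camera-to-camera baseline in meters

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical file names;
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # images assumed rectified

# Classical block matching; OpenCV returns disparity in fixed point (scaled by 16).
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Depth from disparity: Z = f * B / d, valid only where disparity > 0.
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = FOCAL_PX * BASELINE_M / disparity[valid]

Time-of-flight and LIDAR sensors measure depth directly instead of inferring it from two views.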

Patents:
Simulating multi-camera imaging systems
USPTO Patent number 10119809

Multi-camera sync pulse synchronization
USPTO Patent number 9819875

Multi-camera dataset assembly and management with high precision timestamp requirements
USPTO Patent number 9813783

Synchronized capture of image and non-image sensor
USPTO Patent number 9654672

Patent application: Dynamic Calibration of multi-camera systems using multiple multi-view image frames;
Application Filed on November 14, 2017


Presentations:

Multi-camera dynamic calibration of multi-camera systems,
7th IEEE International Workshop on Computational Cameras and Displays (CCD 2018), June 2018
A. Kumar, M. Gururaj, S. Kalpana, and R. Narayanswamy

Simulating multi-camera imaging systems for depth estimation, enhanced photography and video effects,
Imaging and Applied Optics Congress, Washington DC, June 2015
G. Grover, R. Nalla, and R. Narayanswamy

Medical Screening Imaging

Screening systems complement human strengths. Machines are good at performing a task over and over again without their performance degrading, while humans excel at highly nuanced tasks that require experience and contextual information. Medical screening applications, such as the examination of Pap-smear slides and mammograms, are prime examples of machines and humans complementing each other: if the machine scans entire slides and eliminates the normal cells, the human operator can focus on the potentially abnormal areas, an approach shown to improve overall results. Screening systems in security, manufacturing anomaly inspection, and mammogram screening can all benefit from this partition of labor between machines and humans.
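
The partition of labor described above can be summarized in a short sketch (Python; the classifier, threshold, and function names are placeholders, not an actual screening product): a high-sensitivity machine stage discards clearly normal regions so the human reviewer sees only the small fraction that needs expert judgment.

def machine_screen(regions, abnormality_score, threshold=0.1):
    """Keep only regions the machine cannot confidently call normal.

    abnormality_score returns a value in [0, 1]; the threshold is kept low
    so this stage stays high-sensitivity (very few missed abnormal regions).
    """
    return [r for r in regions if abnormality_score(r) >= threshold]

def screen_slide(regions, abnormality_score, human_review):
    """Two-stage workflow: machine triage first, expert review second."""
    flagged = machine_screen(regions, abnormality_score)
    return [human_review(r) for r in flagged]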

Presentations and Publications:
Development and use of cervical cytology database for design of an automated Pap smear screening system,
44th Annual Scientific Meeting of the American Society of Cytopathology, Denver, October 1996
R. J. Stewart, R. Narayanswamy, J. L. Metz, and K. M. Johnson

Optoelectronic region of interest detection in cervical smears,
1996 International Topical Meeting on Optical Computing, Sendai, Japan April 1996.
R. Narayanswamy, D. J. McKnight, and K. M. Johnson

Optoelectronic morphological processor for cervical cancer screening,
Optical Society of America Topical Meeting on Optical Computing, Salt Lake City, March 1995.
R. Narayanswamy, J. P. Sharpe, R. M. Turner, and K. M. Johnson

Morphological feature detection for cervical cancer screening,
SPIE Symposium on Nonlinear Image Processing VI, San Jose, Feb 1995, SPIE Proceedings Volume 2424.
R. Narayanswamy, J. P. Sharpe and K. M. Johnson

Optoelectronic image processing for cervical cancer screening,
SPIE Symposium on Biomedical Optics, Los Angeles, Jan 1994, SPIE Proceedings Volume 2132.
R. Narayanswamy, J. P. Sharpe, and K. M. Johnson

Computational Imaging: Co-design of imaging systems and processing algorithms

Computational imaging co-designs the optics, sensors, and processing algorithms of an imaging system. Such systems maximize information capture at the focal plane rather than capturing aberration-free pictures like a traditional camera. Inverse filtering asks, "What sort of object produced this detected signal?" Physics-based constraints and optimization methods are then used to arrive at the result.
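
A minimal numerical illustration of the inverse-filtering question above is Wiener deconvolution (Python/NumPy sketch; this is a generic textbook method, not the nonlinear or spatially varying processing covered by the patent below): given the detected image and a model of the blur, estimate the object while a regularization term keeps noise from being amplified.

import numpy as np

def wiener_deconvolve(detected, psf, noise_to_signal=1e-2):
    """Estimate the object that produced `detected` through the blur `psf`.

    Frequency-domain Wiener filter W = H* / (|H|^2 + k); the constant k
    acts as the regularizing constraint that prevents the inverse from
    blowing up noise at frequencies where the optics transfer little signal.
    """
    H = np.fft.fft2(psf, s=detected.shape)
    G = np.fft.fft2(detected)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
    return np.fft.ifft2(W * G).real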

Patent:
Optical imaging systems and methods utilizing nonlinear and/or spatially varying image processing
USPTO Patent number: 8717456, 8068163, 7911501


Publication and Presentations:
Analytical optical solution of the extension of depth of field using cubic-phase Wavefront coding. Part II – Design and optimization of the cubic phase,
Journal of Optical Society of America A, April 2008
S. Bagheri, P. E. X. Silveira, R. Narayanswamy and D. Pucci de Farias

Robust iris recognition using Wavefront Coded Imaging systems
Presented at IEEE Robust 2008: Robust Biometrics – Understanding science and technology, November 2 – 5, 2008, Waikiki, Hawaii
P. E. X. Silveira, L. Gao, R. Narayanswamy, and E. R. Dowski Jr.

Design and optimization of a Wavefront Coded miniature camera,
International Congress of Imaging Science, Rochester, NY, May 2006
H. B. Wach, R. Narayanswamy, K. Kubala, and E. R. Dowski Jr

Applications of Wavefront Coded Imaging
IS&T/SPIE Electronic Imaging 2004 – San Jose, CA
R. Narayanswamy, A. E. Baron, V. Chumachenko, and A. Greengard

Imaging System Design / Compression

This body of work looks at acquiring images over a noisy channel and maximizing the information retained. It includes the study of how to characterize digital imaging systems and how to represent images using edge primitives, an area of current interest in the graphics and computer vision communities.
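
As a loose sketch of the edge-primitive idea (Python with SciPy; the Laplacian-of-Gaussian zero-crossing detector used here is a generic stand-in, not the specific representation developed in the papers below), an image can be reduced to the locations and local amplitudes of its edges, from which an approximation of the image can later be reconstructed.

import numpy as np
from scipy import ndimage

def edge_primitives(image, sigma=2.0):
    """Return (row, col, gradient magnitude) triples as a sparse edge description."""
    img = image.astype(float)
    log = ndimage.gaussian_laplace(img, sigma=sigma)
    # Zero crossings of the Laplacian of Gaussian mark candidate edges
    # (horizontal sign changes only, for brevity).
    zero_cross = np.sign(log[:, :-1]) * np.sign(log[:, 1:]) < 0
    zero_cross = np.pad(zero_cross, ((0, 0), (0, 1)))
    grad_mag = ndimage.gaussian_gradient_magnitude(img, sigma=sigma)
    rows, cols = np.nonzero(zero_cross)
    return list(zip(rows, cols, grad_mag[rows, cols]))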

Publications

Compact image representation by edge primitives
Computer Vision, Graphics, and Image Processing, January 1994.
R. Alter-Gartenberg, F. O. Huck, and R. Narayanswamy

Transform-coding image compression for information efficiency and restoration,
Journal of Visual Communication and Image Representation, September 1993
S. E. Reichenbach, Z. Rahman, and R. Narayanswamy

Characterization of digital image gathering devices,
(Comment: this is the de facto standard for characterizing camera performance worldwide)
Optical Engineering, February 1991.
S. E. Reichenbach, S. K. Park, and R. Narayanswamy

Image recovery from edge primitives,
Journal of Optical Society of America A, May 1990.
R. Alter-Gartenberg, F. O. Huck, and R. Narayanswamy

Presentations:

Restoration of subband coded images,
IEEE International Conference on Acoustics, Speech and Signal Processing, Toronto, May 1991.
R. Narayanswamy and Z. Rahman

Robust Image Coding with a Model of Adaptive Retinal Processing,
SPIE Symposium on Advances in Intelligent Systems, OPTCON ’90, Boston, November 1990.
R. Narayanswamy, R. Alter-Gartenberg, and F. O. Huck

Image Coding by Edge Primitives,
SPIE 1990 Technical Symposium on Visual Communication and Image Processing, Lausanne, Switzerland
R. Alter-Gartenberg and R. Narayanswamy

Image characteristics recovery from bandpass filtering,
Optical Society of America and NASA Symposium on Applied Vision, San Francisco, California, July 1989.
R. Alter-Gartenberg and R. Narayanswamy

Image gathering and digital restoration: End-to-end optimization for visual quality,
Optical Society of America and NASA Symposium on Applied Vision, San Francisco, California, July 1989.
F. O. Huck, S. John, J. A. McCormick, and R. Narayanswamy
