Defense Media Network

KITWARE, INC

Harnessing Computer Vision and Deep Learning in Support of the Warfighter (Sponsored)

As a trusted research and development (R&D) partner supporting the U.S. Military, Government, and Industry, Kitware is a leader in developing and advancing state-of-the-art technologies that help save the lives of our warfighters. We have built a strong reputation partnering with leading organizations in areas such as object detection and tracking, threat detection, image and video forensics, and scene understanding, which allows us to collaboratively develop and deliver informative, decision-supporting solutions to the defense community.

Our computer vision team has a long-standing history in the field and has been an early adopter and explorer of deep learning. Within our R&D programs, we develop and transition innovative algorithms and software for automated image and video analysis to push the technology forward. We continuously strive to provide tactical and decision-making advantages for improved situational awareness in the air, in space, on the ground, and at sea.


IMPROVED SITUATIONAL AWARENESS FOR SQUAD LEVEL SUPPORT


Figure 2. Unblinking and untiring, THREAT X constantly monitors the entire field of view, limited only by size, weight, and power (SWaP), providing actionable alerts to soldiers on the ground.

The development and integration of new technology to produce accurate and actionable information for soldiers on the ground is a key element of operations in challenging environments. Small units gain an added edge on the battlefield when they receive reliable information gathered from multiple platforms and multiple sensors without information overload. That information extends their ability to sense and react to threats, giving them the tools to complete their mission and complete it safely.

In support of the Defense Advanced Research Projects Agency's (DARPA) Squad-X Core Technologies Program, Kitware has been actively developing the THreat Reconnaissance and Exploitation from Audio-video Target eXtraction (THREAT X) system.

THREAT X uses computer vision and deep learning to automatically detect, discriminate, and alert squad members to non-line-of-sight (NLOS) threats on the ground. Combined with Kitware's extensive background in image and video analytics, this has enabled us to develop and field test tools that provide relevant information soldiers can confidently act upon while under pressure. Poor resolution, moving platforms, occlusions, and shadows all create significant sensing challenges on the battlefield, so deep learning approaches are used to improve person detection, feature extraction, object classification, and scene recognition. When these techniques are combined with inexpensive cameras mounted on UAVs, ground robots, and squad members, squad-level situational awareness is greatly enhanced. This squad-level sensing can help protect the lives of our soldiers, giving them an edge on the front line.
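As a simple illustration of the person-detection step this kind of system builds on, the sketch below runs an off-the-shelf, pretrained detector on a single camera frame and keeps the person detections above a confidence threshold. It uses torchvision's Faster R-CNN purely as a stand-in; it is not the THREAT X detector, and the threshold is an illustrative choice.

    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor

    PERSON_CLASS = 1  # "person" label index in the COCO-trained model

    # General-purpose pretrained detector, used here only as a stand-in example.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    def detect_people(frame_rgb, score_threshold=0.6):
        """Return [xmin, ymin, xmax, ymax] boxes for people in one RGB frame."""
        with torch.no_grad():
            prediction = model([to_tensor(frame_rgb)])[0]
        keep = (prediction["labels"] == PERSON_CLASS) & (prediction["scores"] >= score_threshold)
        return prediction["boxes"][keep].tolist()

A fielded system would add tracking across frames and fusion across the squad's sensors, but the per-frame detection above is where deep learning enters the pipeline.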


COMBINED GLOBAL SURVEILLANCE FOR INTELLIGENCE


Figure 3. Global surveillance using commercial and national assets can provide analysts with actionable information for tactical decision-making.

In this day and age, efficiently exploiting commercial satellite data in combination with national assets is a challenging goal. Access to an open, extensible software framework for automated satellite image and video analysis would let analysts from all disciplines do their jobs, and do them better. Under the Air Force Research Laboratory Electro-Optic Exploitation Branch (AFRL/RYAT), and in collaboration with the Rochester Institute of Technology (RIT) Digital Imaging and Remote Sensing (DIRS) Laboratory and Real Time Vision and Image Processing Laboratory, Kitware is building such a web-based tool, the VIsual Global InteLligence and ANalytics Toolkit (VIGILANT). We are using deep learning-based object detectors to significantly improve performance beyond previous state-of-the-art techniques, and the approach works across multiple image sources, including commercial satellite imagery, Wide Area Motion Imagery (WAMI), and Full Motion Video (FMV). A predominant challenge in using deep learning is its demand for large amounts of labeled training data, which in many cases is not available for Intelligence, Surveillance, and Reconnaissance (ISR) problems. To alleviate these training requirements, Kitware is pursuing two approaches: synthetic training data and deep learning autoencoder models. Using these approaches to train and refine deep learning object detectors, we are able to perform object-level change detection that addresses analysts' needs.

Imagine the valuable information that could be disseminated in a fraction of the time if an analyst had access to a tool like VIGILANT. They could select an area of interest (AOI) and monitor the activity of specific targets visible in various types of imagery, all at the click of a mouse, then build a timeline of activity and target identifications to share with military decision makers for a better understanding of what is happening in that AOI. This web-based tool has many use cases, providing analysts with validated, readily available information that draws on global resources for intelligence gathering.
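The sketch below illustrates the object-level change detection idea, under the assumption that a trained detector has already produced bounding boxes for the same co-registered AOI at two collection times: detections that cannot be matched between the two passes are reported as departures or arrivals. The box format and the greedy overlap matching are illustrative choices, not the VIGILANT implementation.

    def iou(a, b):
        """Intersection-over-union of two (xmin, ymin, xmax, ymax) boxes."""
        ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        if inter == 0.0:
            return 0.0
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter)

    def object_level_changes(boxes_t0, boxes_t1, iou_threshold=0.3):
        """Return (departed, arrived) boxes between two passes over the same AOI."""
        matched = set()
        departed = []
        for box0 in boxes_t0:
            # Greedily match each earlier detection to its best-overlapping later one.
            best = max(range(len(boxes_t1)),
                       key=lambda j: iou(box0, boxes_t1[j]),
                       default=None)
            if best is not None and iou(box0, boxes_t1[best]) >= iou_threshold:
                matched.add(best)
            else:
                departed.append(box0)
        arrived = [b for j, b in enumerate(boxes_t1) if j not in matched]
        return departed, arrived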


MEDIA FORENSICS: DEVELOPING TECHNIQUES TO DETECT MANIPULATED IMAGES AND VIDEO


Figure 4. Kitware has developed advanced deep-learning techniques to detect and authenticate video and image manipulations.

Fake news has become a reality, and with it comes an urgent need to analyze visual media for adversarial manipulations aimed at advancing particular agendas. When our adversaries manipulate images and video, they can sow disinformation that negatively impacts human perception and decision-making. For government and military officials making high-level decisions globally, this is a major, time-consuming challenge as analysts vet information, images, and video to determine their validity. Automatic, robust, and scalable tools to detect and authenticate these sophisticated video and image manipulations are crucial in this age of disinformation. DARPA has assembled a team of researchers focused specifically on these issues through its Media Forensics (MediFor) program. On MediFor, Kitware has developed advanced deep learning techniques to detect the subtle spatial discrepancies introduced when fake content, such as a person, is added to an image, and to detect temporal inconsistencies in videos when frames are removed or copied. For spatial manipulations, we use a holistic manipulation detector: a convolutional neural network (CNN) trained on a variety of image manipulations, including splices and content-aware removals. Residual noise in the image is computed, and small tiles of that residual are used to train a localized manipulation detector. For temporal inconsistencies, a CNN with spatiotemporal filters has been designed to effectively detect dropped and copied frames. Evaluations by the National Institute of Standards and Technology (NIST) have shown compelling results for these techniques.
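As a rough sketch of the residual-noise preprocessing described above: subtract a denoised copy of the image to expose local noise statistics, then cut the residual into small tiles on which a CNN-based manipulation detector could be trained. The tile size and the median-filter denoiser below are illustrative assumptions, not the MediFor recipe.

    import numpy as np
    from scipy.ndimage import median_filter

    def noise_residual(gray_image, filter_size=3):
        """Residual = image minus a median-filtered (denoised) copy of itself."""
        img = gray_image.astype(np.float32)
        return img - median_filter(img, size=filter_size)

    def residual_tiles(residual, tile=64):
        """Cut the residual map into non-overlapping tile x tile patches."""
        h, w = residual.shape
        patches = [residual[y:y + tile, x:x + tile]
                   for y in range(0, h - tile + 1, tile)
                   for x in range(0, w - tile + 1, tile)]
        return np.stack(patches) if patches else np.empty((0, tile, tile), np.float32)

Spliced or inserted regions tend to carry noise statistics that differ from the host image, which is the kind of cue a tile-level classifier can learn to pick up.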


UNDERSTANDING COMPLEX, LARGE-SCALE DATA


Figure 5. Kitware’s cutting-edge visualization techniques can translate complex large-scale data into useful information.

Complex, large-scale data is a daily challenge for everyone: information in all forms is available but overwhelming to work with and interpret. How can this data be visualized so that its meaning is better understood and translated into relevant, useful information? Rapid development of visual interfaces that decipher and make sense of this information is the goal of DARPA's XDATA program. Kitware's data and analytics team, along with visual interface and design experts from KnowledgeVis, the University of Washington, Harvard University, the Georgia Institute of Technology, and the University of Utah, has been producing cutting-edge visualization techniques that address specific end-user requirements in military and other applications. Current tools are extremely challenging for novice programmers because they require knowledge of many systems and intensive programming to create complete applications. XDATA investigates interactive graphics techniques that reduce this programming burden and automatically suggest effective parameters for visual representations. According to Jeff Baumes, technical leader of the XDATA program at Kitware, “Having an accessible, easy-to-use library of visualization tools will enable analysts to work through data more quickly and thoroughly than ever before.” This work applies to, and benefits, many domains globally, underscoring the importance of understanding complex data and achieving information dominance. Its development can help analysts visualize networks, interpret Kiva microlending data, chart Bitcoin transactions by size over time, and more.
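To give a flavor of the declarative, low-programming-burden style described here, the short example below charts synthetic Bitcoin transactions by size over time. It uses the open source Altair library as a stand-in, not the XDATA tooling itself, and the data and column names are made up for the example.

    import altair as alt
    import numpy as np
    import pandas as pd

    # Synthetic stand-in data: one row per transaction.
    rng = np.random.default_rng(0)
    transactions = pd.DataFrame({
        "timestamp": pd.date_range("2018-01-01", periods=200, freq="h"),
        "value_btc": rng.lognormal(mean=-1.0, sigma=1.0, size=200),
        "size_bytes": rng.integers(250, 2000, size=200),
    })

    # The analyst declares what to encode; the library handles scales, axes, and legends.
    chart = (
        alt.Chart(transactions)
        .mark_circle(opacity=0.6)
        .encode(x="timestamp:T", y="value_btc:Q", size="size_bytes:Q")
        .properties(title="Bitcoin transactions by size over time (synthetic)")
    )
    chart.save("transactions.html")  # interactive chart viewable in a browser

The analyst states what should be encoded (time on x, value on y, transaction size as mark size) rather than how to draw it, which is the programming-burden reduction the program is after.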


DEVELOPING INNOVATIVE SOLUTIONS FOR OUR PARTNERS

Experts in deep learning for vision since 2014 and in computer vision since 2007, Kitware continuously pushes the envelope to develop innovative solutions for our partners through a variety of R&D programs. On the DARPA eXplainable Artificial Intelligence (XAI) program, we are working to develop Deeply EXplainable Artificial Intelligence (DEXAI) models. The current generation of AI systems offers tremendous benefits, but their effectiveness will be limited by the machine's inability to explain its decisions and actions to users; our goal is to make Deep Neural Networks (DNNs) more compatible with human reasoning by making them explainable and interactive. The Deep Intermodal Video Analytics (DIVA) program, run by the Intelligence Advanced Research Projects Activity (IARPA), seeks to develop robust automated activity detection in a multi-camera streaming video environment. We are on the testing and evaluation team, leading the data collection and annotation, baseline algorithms, and open source framework task areas to support automated video monitoring that detects threats. In collaboration with ObjectVideo, and for the Office of Naval Research (ONR), we are developing unsupervised approaches to detecting small vessels at a distance for maritime object detection and tracking. This work builds on our open source Video and Image-Based Retrieval and ANalysis Toolkit (VIBRANT), the culmination of years of R&D into robust surveillance video analytics, including content-based retrieval and alerting using behaviors, actions, and appearance. These capabilities are available through our open source Kitware Image and Video Exploitation and Retrieval (KWIVER) toolkit, which addresses challenging image and video analysis problems.

In addition to the programs and technology development showcased here, we have many other programs that rely on deep learning and computer vision. Our computer vision team is working on Multi-Intelligence (MULTI-INT) Patterns of Life (POL), 3D reconstruction from aerial drone video and satellite imagery, satellite object detection with limited training datasets, the continued technology transition of our Wide Area Motion Imagery (WAMI) object detection and tracking system, and more. Please visit www.kitware.com/computer-vision to see our focus areas and learn more about our computer vision team.


WHO IS KITWARE

Founded in 1998, Kitware is a small open source software business committed to developing and delivering cutting-edge open source software products and services using advanced software methods and technologies. We provide advanced R&D solutions in computer vision, data and analytics, high-performance computing and visualization, medical computing, and quality software process. Our proven open source software platforms include the Visualization Toolkit (VTK), CMake, ParaView, and the Insight Segmentation and Registration Toolkit (ITK). We focus on innovation and practice the principles of open source, which are embodied in our organizational business model, corporate values, and engineering processes. Our extensive collaboration with government agencies, laboratories, educational institutions, commercial companies, and global communities helps us continue to innovate and develop relevant technologies for our partners. Kitware fosters a unique culture and an energized work environment built on the pursuit of new ideas, passion, and meaningful impact, promoting synergy across teams to best meet our customers' needs. We are headquartered in Clifton Park, New York, with additional offices in North Carolina, New Mexico, Virginia, and Lyon, France. Please visit us at www.kitware.com and www.kitware.com/cv60/ to learn more about Kitware and how to leverage our expertise.