Tag: Vision

14
Oct
2020
Posted in computer

Microsoft partners with Team Gleason to build a computer vision dataset for ALS

Microsoft and Team Gleason, the nonprofit organization founded by former NFL player Steve Gleason, today launched Project Insight to create an open dataset of facial imagery of people with amyotrophic lateral sclerosis (ALS). The organizations hope to foster innovation in computer vision and broaden the potential for connectivity and communication for people with accessibility challenges.

Microsoft and Team Gleason assert that existing machine learning datasets don’t represent the diversity of people with ALS, a condition that affects as many as 30,000 people in the U.S. As a result, computer vision systems often fail to accurately recognize people with ALS, whose appearance may be altered by breathing masks, droopy eyelids, watery eyes, and dry eyes caused by medications that control excessive saliva.

Project Insight will investigate how to use data and AI with the front-facing camera already present in many assistive devices to predict where a person is looking on a screen. Team Gleason will work with Microsoft’s Health Next Enable team to gather images of people with ALS looking at their computer so it can train AI models more inclusively. (Microsoft’s Health Next team, which is within its Health AI division, focuses on AI and cloud-based services to improve health outcomes.) Participants will complete a brief medical history questionnaire and be prompted through an app to submit images of themselves using their computer.
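
Microsoft has not published Project Insight’s model details. In general, though, gaze-point prediction is framed as a regression problem: given a face crop from the front-facing camera, predict normalized screen coordinates. The sketch below (Python/PyTorch, with every name, image size, and hyperparameter assumed) illustrates that general framing rather than the project’s actual approach.

# Hedged sketch of gaze-point regression; not Project Insight's real model.
import torch
import torch.nn as nn

class GazeNet(nn.Module):
    # Predicts normalized (x, y) screen coordinates from a cropped face image.
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)

    def forward(self, face):
        return self.head(self.features(face).flatten(1))

model = GazeNet()
faces = torch.randn(8, 3, 96, 96)   # batch of face crops (size assumed)
targets = torch.rand(8, 2)          # ground-truth gaze points on the screen
loss = nn.functional.mse_loss(model(faces), targets)
loss.backward()                     # training would loop this over the dataset

Training such a model more inclusively would then come down to the dataset itself: face crops and gaze targets collected from people with ALS, which is exactly the gap the project aims to fill.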

“ALS progression can be as diverse as the individuals themselves,” Team Gleason chief impact officer Blair Casey said. “So accessing computers and communication devices should not be a one-size-fits-all. We will capture as much information as possible from 100 people living with ALS so we can develop tools for all to effectively use.”

Microsoft and Team Gleason estimate that the project will collect and share 5TB of anonymized data with researchers on data science platforms like Kaggle and GitHub.

“There is a significant lack of disability-specific data that is

12
Oct
2020
Posted in computer

A camera or a computer: How the architecture of new home security vision systems affects choice of memory technology

A long-forecast surge in the number of products based on artificial intelligence (AI) and machine learning (ML) technologies is beginning to reach mainstream consumer markets.

It is true that research and development teams have found that, in some applications such as autonomous driving, the innate skill and judgement of a human is difficult, or perhaps even impossible, for a machine to learn. But while the hype around AI has run ahead of the reality in some areas, a number of real products based on ML capabilities are, with less fanfare, beginning to gain widespread interest from consumers. For instance, intelligent vision-based security and home monitoring systems have great potential: analyst firm Strategy Analytics forecasts that the home security camera market will grow by more than 50% between 2019 and 2023, from a market value of US$8 billion to US$13 billion.

The development of intelligent cameras is possible because one of the functions best suited to ML technology is image and scene recognition. Intelligence in home vision systems can be used to:
– Detect when an elderly or vulnerable person has fallen to the ground and is potentially injured
– Monitor that the breathing of a sleeping baby is normal
– Recognise the face of the resident of a home (in the case of a smart doorbell) or a pet (for instance in a smart cat flap), and automatically allow them to enter
– Detect suspicious or unrecognised activity outside the home and trigger an intruder alarm
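
As a rough illustration of the recognition step behind features like these, the sketch below classifies a single camera frame with a pretrained model. The choice of torchvision and MobileNetV3 is purely illustrative; a shipping home-security product would run a purpose-trained, quantized model on the device itself.

# Illustrative per-frame scene recognition with a pretrained classifier.
import torch
from torchvision import models
from PIL import Image

weights = models.MobileNet_V3_Small_Weights.DEFAULT
model = models.mobilenet_v3_small(weights=weights).eval()
preprocess = weights.transforms()

def classify_frame(frame):
    # Returns the most likely ImageNet label for one camera frame (PIL image).
    with torch.no_grad():
        logits = model(preprocess(frame).unsqueeze(0))
    return weights.meta["categories"][logits.argmax().item()]

# e.g. classify_frame(Image.open("frame.jpg"))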

These new intelligent vision systems for the home, based on advanced image signal processors (ISPs), are in effect function-specific computers. The latest products in this category have adopted computer-like architectures which depend for

08
Oct
2020
Posted in computer

Computer vision

Teaching computers to see more sharply with processing in the cloud


At the Lawrence J. Ellison Institute for Transformative Medicine of USC, scientists have trained a neural network to spot different types of breast cancer using a small data set of fewer than 1,000 images. Instead of teaching the AI system to distinguish between groups of samples, the researchers taught the network to recognize the visual “tissue fingerprint” of tumors so that it could work on much larger, unannotated data sets.
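
The article does not spell out the training objective behind the “tissue fingerprint.” One common way to learn an identity-preserving representation without labels is a contrastive objective that pulls two augmented views of the same patch together in embedding space. The sketch below (PyTorch, with all shapes, the encoder, and the augmentations assumed) shows that general idea, not the Ellison Institute’s actual method.

# Hedged sketch of contrastive representation learning on unlabeled patches.
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, temperature=0.5):
    # NT-Xent-style loss: matching views should be most similar (diagonal).
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature
    labels = torch.arange(z1.size(0))
    return F.cross_entropy(logits, labels)

encoder = torch.nn.Sequential(            # stand-in for a real CNN encoder
    torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 128))
patches = torch.randn(16, 3, 64, 64)      # unannotated tissue patches
view1 = patches + 0.1 * torch.randn_like(patches)   # toy "augmentations"
view2 = patches + 0.1 * torch.randn_like(patches)
loss = contrastive_loss(encoder(view1), encoder(view2))
loss.backward()

Once trained this way, the encoder’s embeddings can be compared or clustered across much larger unannotated collections, which is what lets a fingerprint-style model scale beyond its small labeled set.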

Halfway across the country in suburban Chicago, Oracle’s construction and engineering group is working with video-camera and software companies to build an artificial intelligence system that can tell from live video feeds—with up to 92% accuracy—whether construction workers are wearing hard hats and protective vests and practicing social distancing.
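
Oracle’s system is proprietary, but monitoring of this kind is commonly built as an object detector feeding simple rule checks on each frame. The sketch below assumes a detector already returns labeled boxes; the class names and the pixel-distance threshold are hypothetical.

# Hypothetical rule checks over object-detector output for one video frame.
import itertools

MIN_DISTANCE_PX = 200  # assumed pixel equivalent of a safe distance here

def check_frame(detections):
    # detections: list of (class_name, (x1, y1, x2, y2)) from any detector.
    people = [box for cls, box in detections if cls == "person"]
    hats = sum(cls == "hard_hat" for cls, _ in detections)
    vests = sum(cls == "safety_vest" for cls, _ in detections)
    alerts = []
    if hats < len(people):
        alerts.append("worker(s) without hard hat")
    if vests < len(people):
        alerts.append("worker(s) without safety vest")
    centers = [((x1 + x2) / 2, (y1 + y2) / 2) for x1, y1, x2, y2 in people]
    for (ax, ay), (bx, by) in itertools.combinations(centers, 2):
        if ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 < MIN_DISTANCE_PX:
            alerts.append("social distancing violation")
            break
    return alerts

# e.g. check_frame([("person", (0, 0, 100, 300)), ("hard_hat", (20, 0, 80, 40))])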

Such is the promise of computer vision, whereby machines are trained to interpret and understand the physical world around them, oftentimes spotting and comparing fine visual cues the human eye can miss. The fusion of computer vision with deep learning (a branch of artificial intelligence that employs neural networks), along with advances in graphics processors that run many calculations in parallel and the availability of huge data sets, has led to leaps in accuracy.

Now, a generation of GPUs equipped with even more circuitry for parsing photos and video and wider availability of cloud data centers for training statistical prediction systems have quickened development in self-driving cars, oil and gas exploration, insurance assessment, and other fields.

“Devoting more money to large data centers makes it possible to train problems of any size, so the decision can become simply an economic one: How many dollars should be devoted to finding the best solution to a given data set?”


David Lowe, Professor Emeritus of Computer Science, University of British Columbia


“Machine learning has completely changed computer vision since 2012, as

08
Oct
2020
Posted in computer

Exer Labs raises $2 million and launches computer vision app for Peloton-style coached workouts

Exer Labs has raised $2 million in funding and unveiled Exer Studio, its AI- and computer-vision-powered Mac app that captures your movements to deliver coaching advice and Peloton-style leaderboards for workouts.

The Denver-based fitness startup’s app captures your movements with your laptop’s camera and evaluates your form. You can share your results with friends, fitness coaches, or others and see where you rank on the leaderboards, which motivates you to work harder or faster.

CEO Zaw Thet said in an interview with VentureBeat that Exer relies on edge-based AI (meaning inference runs on your own device rather than in the cloud) and computer vision to power its motion coaching platform. It offers real-time audio and visual feedback via a Mac (and its camera) on almost any type of human motion, without a human in the loop. The mission is to help people move, train, and play better. Coaches can use the app for classes and see who needs help.
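
Exer hasn’t disclosed its pipeline. Motion coaching of this kind is typically built on pose estimation followed by simple geometry over the detected joints. The sketch below counts squat repetitions from a normalized hip-height signal; the pose estimator that would produce that signal, and both thresholds, are assumptions for illustration.

# Hedged sketch: counting squat reps from per-frame hip height.
# Keypoints would come from any pose estimator; that step is abstracted away.

def count_reps(hip_heights, down_thresh=0.6, up_thresh=0.4):
    # hip_heights: normalized values (0 = top of frame, 1 = bottom of frame).
    reps, in_bottom = 0, False
    for y in hip_heights:
        if y > down_thresh:                 # hips dropped: bottom of the squat
            in_bottom = True
        elif y < up_thresh and in_bottom:   # stood back up: one full rep
            reps += 1
            in_bottom = False
    return reps

# Toy trace of two squats:
print(count_reps([0.3, 0.5, 0.7, 0.4, 0.3, 0.65, 0.35]))  # -> 2

Form evaluation works the same way in principle: compare joint angles frame by frame against a reference range for the exercise, and flag deviations.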

“Gyms have closed and are having trouble opening back up,” Thet said. “There are more than 300,000 professionals who aren’t able to train people in person. They have switched to streaming workouts, but it’s hard to keep people engaged on Zoom.”

The company has now raised $4.5 million to date. Investors in the latest round include GGV, Jerry Yang’s AME Cloud Ventures, Morado Ventures, Range VC, Service Provider Capital, Shatter Fund, MyFitnessPal cofounders Mike Lee and Albert Lee, and existing investors Signia Venture Partners and former Zynga executive David Ko.

Fitness in the pandemic

Above: Exer Studio can track movements for people doing workouts.

Image Credit: Exer

Thet said the Mac app uses the camera to capture your movements, so it knows how many repetitions you’ve done and whether your form matches the way the exercise is supposed to be performed.

03
Oct
2020
Posted in technology

“Covid-19 has Compressed VC’s Vision of 5 Years into 5 Months”


Covid-19 has brought about a paradigm shift across industries, and venture investing is no exception. The pandemic and the ensuing lockdowns have significantly influenced consumer behaviour and preferences, perhaps even permanently in some cases.

To understand how the investment landscape has shifted amid the Covid-19 pandemic, and what that means for long-term investing, Entrepreneur India spoke with Vinnie Lauria, founding partner of Golden Gate Ventures, an early-stage venture capital (VC) firm in Southeast Asia. Lauria shared his views on the nitty-gritty of investing, along with the bounce-back plan for businesses.

Impact of the Pandemic on Investment and the Bounce-Back Approach

Most work operations remain remote in Singapore and Indonesia despite the lifting of lockdown restrictions. “In a market like Indonesia, people are working from home and locked down. There were certain presumptions about bounce back in a short time-frame,” says Lauria.

Long-term investors, he added, need to assess how companies are growing now and to reflect on their growth plans for the next decade.

Even though economies across the world have been hit by the pandemic, certain verticals, such as edtech and healthtech, have grown during this period.

VCs look at trends, the growth plan, and the future of a company when making an investment decision. However, Covid-19 has compressed a five-year vision into five months. At the same time, growth has accelerated over the past few months, especially for digital businesses. For instance, online education, grocery, health, and tech startups have grown immensely, seizing the opportunity in the market.

Investment Prospects During the Pandemic

Lauria said that even during the pandemic, investments are being made in both existing and new projects.