A camera or a computer: How the architecture of new home security vision systems affects choice of memory technology
A long-forecast surge in the number of products based on artificial intelligence (AI) and machine learning (ML) technologies is beginning to reach mainstream consumer markets.
It is true that research and development teams have found that, in some applications such as autonomous driving, the innate skill and judgement of a human is difficult, or perhaps even impossible, for a machine to learn. But while the hype around AI has run ahead of the reality in some areas, a number of real products based on ML capabilities are, with less fanfare, beginning to gain widespread interest from consumers. For instance, intelligent vision-based security and home monitoring systems have great potential: analyst firm Strategy Analytics forecasts that the home security camera market will grow by more than 50% between 2019 and 2023, from a market value of US$8 billion to US$13 billion.
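A quick back-of-the-envelope check of the forecast above, assuming the quoted US$8 billion (2019) and US$13 billion (2023) endpoints, confirms the "more than 50%" figure and gives the implied annual growth rate:

```python
# Sanity-check of the Strategy Analytics forecast quoted above.
# The endpoints are taken from the article; the CAGR is derived.
start, end, years = 8.0, 13.0, 4

total_growth = (end - start) / start     # 0.625, i.e. 62.5% overall
cagr = (end / start) ** (1 / years) - 1  # ~0.129, i.e. ~12.9% per year
```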
The development of intelligent cameras is possible because one of the functions best suited to ML technology is image and scene recognition. Intelligence in home vision systems can be used to:
– Detect when an elderly or vulnerable person has fallen to the ground and is potentially injured
– Monitor that the breathing of a sleeping baby is normal
– Recognise the face of the resident of a home (in the case of a smart doorbell) or a pet (for instance in a smart cat flap), and automatically allow them to enter
– Detect suspicious or unrecognised activity outside the home and trigger an intruder alarm
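As a rough illustration of the simplest processing behind the last use case above (motion-triggered alerting), here is a minimal frame-differencing sketch in pure Python. Real intelligent cameras run trained ML models on dedicated ISPs; the frame format (flat lists of grayscale pixel values) and the threshold here are illustrative assumptions, not any product's actual algorithm.

```python
# Minimal sketch of frame-difference motion detection, the most basic
# building block behind motion-triggered alerts. Frames are modelled as
# flat lists of grayscale pixel intensities (0-255) for simplicity.

def motion_score(prev_frame, curr_frame):
    """Mean absolute per-pixel difference between two grayscale frames."""
    total = sum(abs(p - c) for p, c in zip(prev_frame, curr_frame))
    return total / len(curr_frame)

def detect_motion(frames, threshold=10.0):
    """Return indices of frames whose change from the previous frame
    exceeds the threshold (a hypothetical alert trigger)."""
    alerts = []
    for i in range(1, len(frames)):
        if motion_score(frames[i - 1], frames[i]) > threshold:
            alerts.append(i)
    return alerts
```

An ML-based system would replace the fixed threshold with a trained classifier that distinguishes, say, an intruder from a swaying tree, which is exactly where the image signal processors discussed below come in.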
These new intelligent vision systems for the home, based on advanced image signal processors (ISPs), are in effect function-specific computers. The latest products in this category have adopted computer-like architectures which depend for
Data analytics, deep learning, and other AI/ML applications drive multi-billion-dollar flash memory market
Virtual Flash Memory Summit (FMS), the world's premier flash memory conference and exposition, announces a major program track on Storage for Artificial Intelligence and Machine Learning (AI/ML) Applications.
The new track features talks on storage strategies, model training, workloads, NVMe and logical volumes, persistent memory, software-defined architectures, and accelerating the GPU data path. It also includes panels on model scalability and long-term horizons, plus a keynote by Geoffrey Burr, Distinguished Researcher at IBM Almaden Research Center. Virtual Flash Memory Summit 2020 will be held on November 10-12 and expects to draw more than 6,000 attendees.
AI/ML applications require vast amounts of low latency, high-throughput flash storage. Cloud and enterprise data center architectures must be optimized to train deep neural networks and analyze petabyte-scale datasets, all while satisfying critical cost constraints.
The rapid adoption of AI/ML applications is fueling tremendous growth in demand for flash memory. According to IDC, the combined flash memory and SSD (Solid State Drive) markets will grow to almost $90 billion in 2022.
“AI/ML is the fastest-growing application in data centers today,” said Chuck Sobey, Conference Chairperson for Flash Memory Summit. “Advances like persistent memory, computational storage, QLC technology, and emerging non-volatile memories must be combined with rapid progress in 3D NAND flash to meet the needs of AI/ML for more data, faster.”
Now in its 15th year, Flash Memory Summit features the latest technology trends, the most innovative products, and the broadest coverage of this rapidly expanding market. In 2019, FMS drew over 6,000 registrants and over 120 exhibitors. The conference also features marketing and market research sessions plus sessions sponsored by NVM Express®, SNIA, and TechTarget, as well as a full-day free track by IDC on the latest market