Nasdaq Women in Technology: Niharika Sharma, Senior Software Engineer, Nasdaq’s Machine Intelligence Lab
Niharika Sharma is a Senior Software Engineer for Nasdaq’s Machine Intelligence Lab. She designs systems that gather and process natural language data and apply machine learning and natural language processing techniques to it, generating valuable insights to support business decisions. In recent years, she has worked on Natural Language Generation (NLG) and Surveillance Automation for Nasdaq Advisory Services. We sat down with Niharika to learn more about how she got her start in computer science and how she approaches challenges in her career.
Can you describe your day-to-day as a senior software engineer at Nasdaq?
My day-to-day work involves collaborating with Data Scientists to solve problems, ideating business possibilities with product teams and working with Data/Software Engineers to transform ideas into solutions.
How did you become involved in the technology industry, and how has technology influenced your role?
My first exposure to Computer Science was a Logo programming class that I took as a junior in high school. After that, I took a couple of coding classes for fun.
When it came to choosing a college major, my high school Mathematics teacher suggested I consider a career in Software Engineering. At first, I thought, “Programming?! That’s too geeky!” I liked coding, but I never wanted to be that nerd who sits in a cube staring at a computer all day. For college, I chose to study Chemistry at Delhi University, but a few months into the course, I realized technology was where I belonged, and I eventually pivoted to Engineering.
A decade later, I admit that it was the best decision I ever made. I found the concepts and problem solving so engaging that after obtaining my degree, I took a leap of faith and moved to the U.S. to pursue a Masters in Computer Science from Northeastern University. In the final semester, I
Machine learning and conventional programming are two different approaches to getting a computer to solve a problem, and they yield different outcomes and expectations.
By definition, machine learning is a field of computer science that enables computers to learn without being explicitly programmed. It gives computers the ability to solve problems and perform complex tasks on their own. In many cases, problems solved using machine learning depend on the computer’s learning experience and could not have been solved with conventional programming languages. Examples of such problems include face recognition, driving, and disease diagnosis. With a conventional programming language, on the other hand, the computer’s behavior is coded by first devising a suitable algorithm that follows a predefined set of rules.
In other words, machine learning depends on a different form of analytics in which input and output data are fed into algorithms, and the algorithms then create the program. Conventional programming languages, by contrast, involve manually writing the program; given input data, the computer then generates an output based on the programmed logic. For instance, you can predict consumer behavior through trained machine learning algorithms.
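The contrast can be sketched in a few lines of Python. This is a hypothetical illustration, not code from any real system: the first function encodes a rule chosen by the programmer, while the second "rule" is inferred from example input/output pairs via a simple least-squares fit.

```python
# Conventional programming: the rule is written by hand.
def discount(amount):
    # Hand-coded business rule (made up for illustration):
    # 10% off orders over 100.
    return amount * 0.9 if amount > 100 else amount

# Machine learning: the rule is inferred from input/output examples.
# Toy data roughly following y = 2x; a one-variable least-squares
# fit "learns" the slope and intercept from the examples.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

def learned(x):
    # The "program" here was created from the data, not written by hand.
    return slope * x + intercept

print(discount(150))     # output of the hand-coded rule
print(learned(5.0))      # prediction for an input not in the training data
```

The point of the sketch is the direction of the arrows: in the first case the programmer supplies the logic and the computer supplies the output; in the second case the programmer supplies inputs and outputs, and the fitting procedure supplies the logic.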
Another significant contrast between machine learning and conventional programming is the precision of predictions. A conventional program computes over a fixed set of input parameters. Machine learning, on the other hand, gathers data from past events (historical data) and builds a model capable of adapting independently to new sets of data to produce reliable, repeatable results. This sort of self-learning model cannot be built with conventional programming languages.
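As a rough illustration of a model built from historical data and then applied to new observations, consider a toy nearest-centroid classifier. The labels and data points below are invented for the example; a real application would use many more features and observations.

```python
# Hypothetical "historical data": two labeled groups of 2-D observations.
historical = {
    "low_risk":  [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9)],
    "high_risk": [(4.0, 4.2), (3.8, 4.1), (4.2, 3.9)],
}

def centroid(points):
    # Mean position of a group of points.
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

# "Training": summarize the historical data into one centroid per label.
centroids = {label: centroid(pts) for label, pts in historical.items()}

def predict(point):
    # Classify a new, previously unseen observation by its nearest centroid.
    def dist2(c):
        return (point[0] - c[0]) ** 2 + (point[1] - c[1]) ** 2
    return min(centroids, key=lambda label: dist2(centroids[label]))

print(predict((0.9, 1.1)))   # falls near the "low_risk" history
print(predict((4.1, 4.0)))   # falls near the "high_risk" history
```

The model was never told a rule for either label; the rule is whatever structure the historical data happened to contain, which is why retraining on new data changes the model's behavior without anyone rewriting code.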
However, with machine learning, there are no restrictions on the number of data sets and models that can be generated since the built models are capable of learning independently. As long as you have enough processor power and memory, you can use as many input parameters and
Apple’s Vice President of Platform Architecture offers insight on the new A14 Bionic processor, the importance of machine learning, and how Apple continues to separate itself from its competitors in a new interview.
According to Apple, the A14 Bionic offers a 30% boost in CPU performance and, with a new four-core graphics architecture, a 30% boost in graphics performance, compared against the A12 Bionic used in the iPad Air 3. Against the A13, benchmarks suggest the A14 offers a 19% improvement in CPU performance and 27% in graphics.
In an interview with German magazine Stern, Apple’s Vice President of Platform Architecture, Tim Millet, offered some insight into what makes the A14 Bionic processor tick.
Millet explains that while Apple did not invent machine learning and neural engines — “the foundations for this go back many decades” — they did help to find ways to accelerate the process.
Machine learning requires neural networks to be trained on large, complex data sets, which, until recently, did not exist. As storage grew larger, machines could take advantage of larger data sets, but the learning process was still relatively slow. However, in the early 2010s, this all began to change.
Fast forward to 2017 and the release of the iPhone X, the first iPhone to feature Face ID. The feature was powered by the A11 chip, which was capable of processing 600 billion arithmetic operations per second.
The five-nanometer A14 Bionic chip, which will debut in the new iPad Air set for release in October, can perform over 18 times as many operations — up to 11 trillion per second.
“We are excited about the emergence of machine learning and how it enables a completely new class,” Millet told Stern. “It takes my breath away when I see what people can do with the A14
“Earlier this year, I attended a conference and was shocked to find that you could actually buy voting machines on eBay. So I bought one, two months ago, and have been able to open it up and look at the chips.”
Beatrice Atobatele is trying to hack one of the most commonly used voting machines in the US, to look for security vulnerabilities, but not with any criminal intentions.
Beatrice is actually one of more than 200 people who have signed up to a volunteer group of security experts and hackers called the Election Cyber Surge.
And by understanding how this machine works, she hopes she can ensure any vulnerabilities are fixed.
“I’ve bypassed the authentication itself,” she says.
“I’m still learning and trying to find any new vulnerabilities that might not be known about yet.”
The problem with US elections, Beatrice and others say, is how disjointed they are.
Most estimates suggest there are about 8,000 separate election jurisdictions.
The equipment and voting methods vary dramatically.
And every step of the process is vulnerable to hackers and human error.
In the polling booth, there are many different systems, from direct-recording electronic voting machines to ballot-marking devices and paper-based systems.
And the more digitised and connected a system is, the higher the risk of some sort of cyber-interference.
Like all the volunteers, Beatrice’s research is conducted outside of her day job.
And as a keen footballer, and mother to two soccer-obsessed daughters in New York City, she has to fit the volunteering around a busy schedule.
She didn’t plan to get into cyber-security at all.
But 17 years ago, she lost more than $1,000 (£775) after hackers