Google launches its third major operating system, Fuchsia

Google is officially rolling out a new operating system, called Fuchsia, to consumers. The release is a bit hard to believe at this point, but Google confirmed the news to 9to5Google, and several members of the Fuchsia team have confirmed it on Twitter. The official launch date was apparently yesterday. Fuchsia is certainly getting a quiet, anti-climactic release, as it’s only being made available to one device, the Google Home Hub, aka the first-generation Nest Hub. There are no expected changes to the UI or functionality of the Home Hub, but Fuchsia is out there. Apparently, Google simply wants to prove out the OS in a consumer environment.

Fuchsia’s one launch device was originally called the Google Home Hub and is a 7-inch smart display that responds to Google Assistant commands. It came out in 2018. The device was renamed the “Nest Hub” in 2019, and it’s only this first-generation device, not the second-generation Nest Hub or Nest Hub Max, that is getting Fuchsia. The Home Hub’s OS has always been an odd duck. When the device was released, Google was pitching a smart display hardware ecosystem to partners based on Android Things, a now-defunct Internet-of-things/kiosk OS. Instead of following the recommendations it gave to hardware partners, Google loaded the Home Hub with its in-house Google Cast Platform—and then undercut all its partners on price.

Fuchsia has long been a secretive project. We first saw the OS as a pre-alpha smartphone UI that was ported to Android in 2017. In 2018, we got the OS running natively on a Pixelbook. After that, the Fuchsia team stopped doing its work in the open and stripped all UI work out of the public repository.
There’s no blog post or any fanfare at all to mark Fuchsia’s launch. Google’s I/O conference happened last week, and the company didn’t make a peep about Fuchsia there, either. Really, this ultra-quiet, invisible release is the most “Fuchsia” launch possible.

Fuchsia is something very rare in the world of tech: it’s a built-from-scratch operating system that isn’t based on Linux. Fuchsia uses a microkernel called “Zircon” that Google developed in house. Creating an operating system entirely from scratch and bringing it all the way to production sounds like a difficult task, but Google managed to do exactly that over the past six years. Fuchsia’s primary app-development framework is Flutter, a cross-platform UI toolkit from Google. Flutter runs on Android, iOS, and the web, so writing Flutter apps today for existing platforms means you’re also writing Fuchsia apps for tomorrow.

The Nest Hub’s switch to Fuchsia is interesting precisely because of how invisible it should be. It will be the first test of Fuchsia’s future-facing Flutter app support: the Google smart display interface is written in Flutter, so Google can take the existing interface, rip out all the Google Cast guts underneath, and plop the exact same interface code down on top of Fuchsia. Google watchers have long speculated that this was the plan all along. Rather than making a disruptive OS switch, Google could simply get developers writing in Flutter and then seamlessly swap out the operating system underneath.

So, unless we get lucky, don’t expect a dramatic hands-on post of Fuchsia running on the Nest Hub. It’s likely that there isn’t currently much to see or do with the new operating system, and that’s exactly how Google wants it. Fuchsia is more than just a smart-display operating system, though. An old Bloomberg report from 2018 has absolutely nailed the timing of Fuchsia so far, saying that Google wanted to first ship the OS on connected home devices “within three years”—the report turns three years old in July. The report also laid out the next steps for Fuchsia, including an ambitious expansion to smartphones and laptops by 2023.
Taking over the Nest Hub is one thing—no other team at Google really has a vested interest in the Google Cast OS (you could actually argue that the Cast OS is on the way out, as the latest Chromecast is switching to Android). Moving the OS onto smartphones and laptops is an entirely different thing, though, since the Fuchsia team would crash into the Android and Chrome OS divisions. Now you’re getting into politics.

Read More »

Evolving to a more equitable AI

The pandemic that has raged across the globe over the past year has shone a cold, hard light on many things—the varied levels of preparedness to respond; collective attitudes toward health, technology, and science; and vast financial and social inequities. As the world continues to navigate the covid-19 health crisis, and some places even begin a gradual return to work, school, travel, and recreation, it’s critical to resolve the competing priorities of protecting the public’s health equitably while ensuring privacy.

The extended crisis has led to rapid change in work and social behavior, as well as an increased reliance on technology. It’s now more critical than ever that companies, governments, and society exercise caution in applying technology and handling personal information. The expanded and rapid adoption of artificial intelligence (AI) demonstrates how adaptive technologies are prone to intersect with humans and social institutions in potentially risky or inequitable ways.

“Our relationship with technology as a whole will have shifted dramatically post-pandemic,” says Yoav Schlesinger, principal of the ethical AI practice at Salesforce. “There will be a negotiation process between people, businesses, government, and technology; how their data flows between all of those parties will get renegotiated in a new social data contract.”

AI in action
As the covid-19 crisis began to unfold in early 2020, scientists looked to AI to support a variety of medical uses, such as identifying potential drug candidates for vaccines or treatment, helping detect potential covid-19 symptoms, and allocating scarce resources like intensive-care-unit beds and ventilators. Specifically, they leaned on the analytical power of AI-augmented systems to develop cutting-edge vaccines and treatments.

While advanced data analytics tools can help extract insights from a massive amount of data, the result has not always been more equitable outcomes. In fact, AI-driven tools and the data sets they work with can perpetuate inherent bias or systemic inequity. Throughout the pandemic, agencies like the Centers for Disease Control and Prevention and the World Health Organization have gathered tremendous amounts of data, but the data doesn’t necessarily accurately represent populations that have been disproportionately and negatively affected—including black, brown, and indigenous people—nor do some of the diagnostic advances they’ve made, says Schlesinger.

For example, biometric wearables like Fitbit or Apple Watch demonstrate promise in their ability to detect potential covid-19 symptoms, such as changes in temperature or oxygen saturation. Yet those analyses rely on often flawed or limited data sets and can introduce bias or unfairness that disproportionately affects vulnerable people and communities.

“There is some research that shows the green LED light has a more difficult time reading pulse and oxygen saturation on darker skin tones,” says Schlesinger, referring to the semiconductor light source. “So it might not do an equally good job at catching covid symptoms for those with black and brown skin.”
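
To make the kind of disparity Schlesinger describes measurable, one common practice is to disaggregate a model’s error rates by demographic group and compare them. The Python sketch below is a minimal, hypothetical illustration; the column names, groups, and values are invented for this example rather than drawn from any real wearable dataset.

```python
# Hypothetical sketch: disaggregating a symptom-detection model's errors by group.
# Column names and values are invented for illustration only.
import pandas as pd

# Each row records the wearer's group, whether they truly had symptoms,
# and whether the model flagged them based on wearable sensor data.
results = pd.DataFrame({
    "skin_tone_group": ["light", "light", "dark", "dark", "dark", "light", "dark", "light"],
    "had_symptoms":    [1, 0, 1, 1, 0, 1, 1, 0],
    "model_flagged":   [1, 0, 0, 1, 0, 1, 0, 0],
})

def false_negative_rate(group: pd.DataFrame) -> float:
    """Share of true symptomatic cases the model failed to flag."""
    positives = group[group["had_symptoms"] == 1]
    if positives.empty:
        return float("nan")
    return float((positives["model_flagged"] == 0).mean())

# A large gap between groups is exactly the kind of inequity described above.
print(results.groupby("skin_tone_group").apply(false_negative_rate))
```

If one group’s false-negative rate is markedly higher, the tool is quietly failing the people the article warns about, regardless of its overall accuracy.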

AI has shown greater efficacy in helping analyze enormous data sets. A team at the Viterbi School of Engineering at the University of Southern California developed an AI framework to help analyze covid-19 vaccine candidates. After identifying 26 potential candidates, it narrowed the field to 11 that were most likely to succeed. The data source for the analysis was the Immune Epitope Database, which includes more than 600,000 contagion determinants arising from more than 3,600 species.

Other researchers from Viterbi are applying AI to decipher cultural codes more accurately and better understand the social norms that guide ethnic and racial group behavior. That can have a significant impact on how a certain population fares during a crisis like the pandemic, owing to religious ceremonies, traditions, and other social mores that can facilitate viral spread.

Lead scientists Kristina Lerman and Fred Morstatter have based their research on Moral Foundations Theory, which describes the “intuitive ethics” that form a culture’s moral constructs, such as caring, fairness, loyalty, and authority, helping inform individual and group behavior.

“Our goal is to develop a framework that allows us to understand the dynamics that drive the decision-making process of a culture at a deeper level,” says Morstatter in a report released by USC. “And by doing so, we generate more culturally informed forecasts.”

The research also examines how to deploy AI in an ethical and fair way. “Most people, but not all, are interested in making the world a better place,” says Schlesinger. “Now we have to go to the next level—what goals do we want to achieve, and what outcomes would we like to see? How will we measure success, and what will it look like?”

Assuaging ethical concerns
It’s critical to interrogate the assumptions about collected data and AI processes, Schlesinger says. “We talk about achieving fairness through awareness. At every step of the process, you’re making value judgments or assumptions that will weight your outcomes in a particular direction,” he says. “That is the fundamental challenge of building ethical AI, which is to look at all the places where humans are biased.”

Part of that challenge is performing a critical examination of the data sets that inform AI systems. It’s essential to understand the data sources and the composition of the data, and to answer such questions as: How is the data made up? Does it encompass a diverse array of stakeholders? What is the best way to deploy that data into a model to minimize bias and maximize fairness? A simple illustration of this kind of audit follows below.
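
As a concrete, deliberately simplified illustration of that kind of examination, the sketch below compares how groups are represented in a training set against assumed reference shares of the population a model is meant to serve. The group labels, counts, and reference shares are placeholders invented for this example, not figures from any real dataset.

```python
# Hypothetical sketch: auditing how well a training set represents its target population.
# Group labels and reference shares are placeholders for illustration only.
import pandas as pd

# Training records, each tagged with a demographic attribute (invented values).
training_data = pd.DataFrame({
    "group": ["A", "A", "A", "A", "A", "A", "B", "B", "C", "D"],
})

# Assumed share of each group in the population the model will serve.
reference_shares = pd.Series({"A": 0.40, "B": 0.30, "C": 0.20, "D": 0.10})

observed_shares = training_data["group"].value_counts(normalize=True)

audit = pd.DataFrame({"observed": observed_shares, "expected": reference_shares}).fillna(0.0)
audit["gap"] = audit["observed"] - audit["expected"]

# Large negative gaps flag groups the dataset under-represents,
# one place where bias can enter any model trained on it.
print(audit.sort_values("gap"))
```

Representation alone doesn’t guarantee fairness, but an audit like this turns the composition question from a rhetorical one into one that can actually be answered.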

As people go back to work, employers may now be using sensing technologies with AI built in, including thermal cameras to detect high temperatures; audio sensors to detect coughs or raised voices, which contribute to the spread of respiratory droplets; and video streams to monitor compliance with hand-washing procedures, physical-distancing rules, and mask requirements.

Such monitoring and analysis systems not only have technical-accuracy challenges but pose core risks to human rights, privacy, security, and trust. The impetus for increased surveillance has been a troubling side effect of the pandemic. Government agencies have used surveillance-camera footage, smartphone location data, credit card purchase records, and even passive temperature scans in crowded public areas like airports to help trace movements of people who may have contracted or been exposed to covid-19 and establish virus transmission chains.

“The first question that needs to be answered is not just can we do this—but should we?” says Schlesinger. “Scanning individuals for their biometric data without their consent raises ethical concerns, even if it’s positioned as a benefit for the greater good. We should have a robust conversation as a society about whether there is good reason to implement these technologies in the first place.”

What the future looks like
As society returns to something approaching normal, it’s time to fundamentally re-evaluate the relationship with data and establish new norms for collecting data, as well as the appropriate use—and potential misuse—of data. When building and deploying AI, technologists will continue to make necessary assumptions about the data and the processes that act on it, but the underpinnings of that data should be questioned. Is the data legitimately sourced? Who assembled it? What assumptions is it based on? Is it accurately presented? How can citizens’ and consumers’ privacy be preserved?

As AI is more widely deployed, it’s essential to consider how to engender trust as well. Using AI to augment human decision-making, rather than entirely replacing human input, is one approach.
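
One simple way to implement that “augment, don’t replace” pattern is a confidence gate: the model acts only on clear-cut cases and routes everything else to a person. The sketch below is a generic illustration under assumed names; the predict_with_confidence placeholder and the 0.9 threshold are hypothetical, not part of any specific vendor’s system.

```python
# Hypothetical sketch of a human-in-the-loop gate: the model decides only when
# it is confident, and defers everything else to a human reviewer.
from dataclasses import dataclass
from typing import Tuple

CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff; tune per use case and risk tolerance

@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str  # "model" or "human"

def predict_with_confidence(case: dict) -> Tuple[str, float]:
    """Placeholder for a real model call; returns (label, confidence)."""
    return ("approve", 0.72)  # a trained classifier would go here

def human_review(case: dict) -> str:
    """Placeholder for routing the case into a human review queue."""
    return "needs_review"

def decide(case: dict) -> Decision:
    label, confidence = predict_with_confidence(case)
    if confidence >= CONFIDENCE_THRESHOLD:
        return Decision(label, confidence, decided_by="model")
    # Low confidence: the model assists, but a person makes the call.
    return Decision(human_review(case), confidence, decided_by="human")

print(decide({"id": 123}))
```

In regulated settings like the health-care and finance examples Schlesinger raises below, the threshold can simply be set so that consequential decisions always reach a human.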

“There will be more questions about the role AI should play in society, its relationship with human beings, and what are appropriate tasks for humans and what are appropriate tasks for an AI,” says Schlesinger. “There are certain areas where AI’s capabilities and its ability to augment human capabilities will accelerate our trust and reliance. In places where AI doesn’t replace humans, but augments their efforts, that is the next horizon.”

There will always be situations in which a human needs to be involved in the decision-making. “In regulated industries, for example, like health care, banking, and finance, there needs to be a human in the loop in order to maintain compliance,” says Schlesinger. “You can’t just deploy AI to make care decisions without a clinician’s input. As much as we would love to believe AI is capable of doing that, AI doesn’t have empathy yet, and probably never will.”

It’s critical for data collected and created by AI to not exacerbate but minimize inequity. There must be a balance between finding ways for AI to help accelerate human and social progress, promoting equitable actions and responses, and simply recognizing that certain problems will require human solutions.

Read More »

Embracing the rapid pace of AI

In a recent survey, “2021 Thriving in an AI World,” KPMG found that across every industry—manufacturing to technology to retail—the adoption of artificial intelligence (AI) …

Read More »