
Educators Needed for a Quantum Future

To build a workforce that can meet the expected future demand in the quantum sector, we need to train many more quantum-literate educators and marshal support for them.

In 2018 the US federal government passed the National Quantum Initiative Act, a program designed to accelerate the country’s quantum research and development activities. In the next decade, quantum information science and quantum technologies are expected to have a significant impact on the US economy, as well as on those of other countries. To fulfill that promise, the US will need a “quantum-capable” workforce that is conversant with the core aspects of quantum technologies and is large enough to meet the expected demand. But even now, as quantum-career opportunities are just starting to appear, supply falls short of demand; according to a 2022 report, there is currently only around one qualified candidate for every three quantum job openings [1]. We call on educational institutions and funding agencies to invest significantly in workforce development to keep this shortage from worsening.

Most of today’s jobs in quantum information science and technology (QIST) require detailed knowledge and skills that students typically gain in graduate-level programs [2]. As the quantum industry matures from a research-and-development focus toward a deployment focus, this requirement will likely relax, increasing the proportion of QIST jobs compatible with undergraduate-level training. However, 86% of QIST-focused courses currently take place at PhD-granting research institutions [3], and very few other undergraduate institutions offer opportunities to learn about the subject. To meet future demand, we believe that must change: QIST education should be incorporated into the curricula of predominantly undergraduate institutions and community colleges in the US. Adding QIST classes at these institutions, however, will be no easy task.

Remote workers can now hold down many jobs thanks to AI tools

The pandemic also helped by normalizing remote work.

The findings come from a new report by Vice.

“That’s the only reason I got my job this year,” one worker, referred to only as Ben, said of OpenAI’s tool.



Artificial-intelligence tools can enable remote workers not just to hold more than one job, but to do several with time to spare. Vice spoke anonymously to workers holding down two to four full-time jobs with help from these tools; all agreed it is an effective way to increase one’s income.

Internet access must become a human right or we risk ever-widening inequality, argues researcher

People around the globe are so dependent on the internet to exercise socioeconomic human rights such as education, health care, work, and housing that online access must now be considered a basic human right, a new study reveals.

Internet access can make the difference between people receiving an education, staying healthy, finding a home, and securing employment, or going without.

Even if people have offline opportunities, such as accessing schemes or finding housing, they are at a comparative disadvantage to those with internet access.

Doomsday Predictions Around ChatGPT Are Counter-Productive

The last few weeks have been abuzz with news and fears (well, largely fears) about the impact ChatGPT and other generative technologies might have on the workplace. Goldman Sachs predicted 300 million jobs would be lost, while the likes of Steve Wozniak and Elon Musk called for AI development to be paused (although pointedly not the development of autonomous driving).

Indeed, OpenAI chief Sam Altman recently declared that he was “a little bit scared”, with the sentiment shared by OpenAI’s chief scientist Ilya Sutskever, who recently said that “at some point it will be quite easy, if one wanted, to cause a great deal of harm”.


As fears mount about the jobs supposedly at risk from generative AI technologies like ChatGPT, are these fears likely to prevent people from taking steps to adapt?

Mom, Dad, I Want To Be A Prompt Engineer

A new career is emerging with the spread of generative AI applications like ChatGPT: prompt engineering, the art (not science) of crafting effective instructions for AI models.

“In ten years, half of the world’s jobs will be in prompt engineering,” declared Robin Li, cofounder and CEO of Chinese AI giant Baidu. “And those who cannot write prompts will be obsolete.”

That may be a bit of big tech hyperbole, but there’s no doubt that prompt engineers will become the wizards of the AI world, coaxing and guiding AI models into generating content that is not only relevant but also coherent and consistent with the desired output.
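As a concrete illustration of what the craft involves, much of prompt engineering comes down to structuring an instruction so the model receives role, context, task, and output format explicitly. The sketch below is entirely illustrative and assumed; the template, field names, and example text are not drawn from any of the articles above.

```python
# A minimal, illustrative sketch of prompt engineering: the same request,
# restructured with an explicit role, context, task, and output format.
# All names and template fields here are hypothetical examples.

def build_prompt(task: str, context: str, output_format: str) -> str:
    """Assemble a structured prompt. Spelling out the role, the relevant
    context, the task, and the expected output format tends to produce
    more consistent responses than a bare one-line question."""
    return (
        "You are a helpful assistant.\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Respond strictly in this format: {output_format}"
    )

prompt = build_prompt(
    task="Summarize the quantum workforce report in two sentences.",
    context="A 2022 report found one qualified candidate per three quantum job openings.",
    output_format="plain text, no bullet points",
)
print(prompt)
```

The point of the sketch is the discipline, not the code: each field forces the prompt writer to decide what the model needs to know and what shape the answer should take, which is where the "coaxing and guiding" described above actually happens.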

New Stanford report highlights the potential, costs, and risks of AI

AI-related jobs are on the rise but funding has taken a dip.

The technology world moves through waves of terminology. Last year, the buzz was all about building the metaverse, until attention turned to artificial intelligence (AI), which has since occupied the top news spots almost everywhere. To know whether this wave will last or fade, one needs to look at trusted sources in the domain, such as the report released by Stanford University.

For years now, the Institute for Human-Centered Artificial Intelligence at Stanford has been releasing its AI Index on an annual basis.



With AI occupying center stage for the past few months, the AI Index is a valuable resource to see what the future holds.

How to Survive the AI Revolution

Is artificial intelligence on the path to replacing people and jobs? Not quite. GSB professors argue that instead of viewing AI as a competitor, we should be embracing it as a collaborator.

“The idea that AI is aimed toward automation is a misconception. There’s so much more opportunity for this technology to augment humans than the very narrow notion of replacing humans,” said Professor Fei-Fei Li, co-director of the Stanford Institute for Human-Centered Artificial Intelligence.

Elon Musk and more than 1,000 people sign an open letter calling for a pause on training AI systems more powerful than GPT-4

The non-profit behind the letter, the Future of Life Institute, said powerful AI systems should be developed only “once we are confident that their effects will be positive and their risks will be manageable.” It cited potential risks to humanity and society, including the spread of misinformation and widespread automation of jobs.

The letter urged AI companies to create and implement a set of shared safety protocols for AI development, which would be overseen by independent experts.

Apple cofounder Steve Wozniak, Stability AI CEO Emad Mostaque, researchers at Alphabet’s AI lab DeepMind, and notable AI professors have also signed the letter. At the time of publication, OpenAI CEO Sam Altman had not added his signature.
