
The current health crisis has snowballed into a world economic crisis in which every old business norm has been challenged. In such times, we cannot fall back on old ways of doing business. Today, three technologies — the Internet of Things (IoT), Artificial Intelligence (AI), and blockchain — are poised to change every aspect of enterprises and our lives. Now more than ever, organisations realise the pertinent need for a robust digital foundation for their businesses as their future plans have been disrupted. “To achieve that level of business sophistication holistically, it is imperative that there is a seamless flow of data across all the functions of an enterprise. That requires connected data that is secure and is driven by connected intelligence,” Guruprasad Gaonkar, JAPAC SaaS Leader for ERP & Digital Supply Chain at Oracle, told Moneycontrol in an interview:

How is India reacting to emerging technologies as compared to other Asia Pacific (APAC) regions?

It turns out that you don’t need a computer to create an artificial intelligence. In fact, you don’t even need electricity.

In an extraordinary bit of left-field research, scientists from the University of Wisconsin–Madison have found a way to create artificially intelligent glass that can recognize images without any need for sensors, circuits, or even a power source — and it could one day save your phone’s battery life.

“We’re always thinking about how we provide vision for machines in the future, and imagining application specific, mission-driven technologies,” researcher Zongfu Yu said in a press release. “This changes almost everything about how we design machine vision.”

Technology giant Apple Inc. (NASDAQ: AAPL) has bought a small Canadian start-up to help it improve machine learning and artificial intelligence (AI).

Apple has purchased Waterloo, Ontario-based company Inductiv Inc., adding to more than a dozen AI-related acquisitions in recent years. Inductiv develops technology that uses AI to automate the task of identifying and correcting errors in data. Having clean data is important for machine learning, a popular and powerful type of artificial intelligence that helps software improve with less human intervention.

The engineering team from Inductiv joined Apple in recent weeks to work on Siri, machine learning and data science. The Inductiv acquisition is part of Apple’s broader machine-learning strategy. Apple has been upgrading the underlying technology that goes into the Siri digital assistant and other AI-powered products.

Dozens of journalists have been sacked after Microsoft decided to replace them with artificial intelligence software.

Staff who maintain the news homepages on Microsoft’s MSN website and its Edge browser – used by millions of Britons every day – have been told that they will no longer be required because robots can now do their jobs.

Around 27 individuals employed by PA Media – formerly the Press Association – were told on Thursday that they would lose their jobs in a month’s time after Microsoft decided to stop employing humans to select, edit and curate news articles on its homepages.

Kitty Hawk is shutting down its Flyer program, the aviation startup’s inaugural moonshot to develop an ultralight electric flying car designed for anyone to use.

The company, backed by Google co-founder Larry Page and led by Sebastian Thrun, said it’s now focused on scaling up Heaviside, a sleeker, more capable (once secret) electric aircraft that is quiet, fast and can fly and land anywhere autonomously.

Kitty Hawk is laying off most of Flyer’s 70-person team, TechCrunch learned. A few employees will be brought over to work on Heaviside, according to the company. Those who are laid off will receive at least 20 weeks of pay, plus tenure, depending on how long they were with the company. Former workers will also receive their annual bonus and have their health insurance covered through the end of the year. The company said it will set up placement services to help people find employment.

A team of researchers from OpenAI recently published a paper describing GPT-3, a deep-learning model for natural-language processing with 175 billion parameters, 100x more than the previous version, GPT-2. The model is pre-trained on nearly half a trillion words and achieves state-of-the-art performance on several NLP benchmarks without fine-tuning.

In a paper published on arXiv, a team of over 30 co-authors described the model and several experiments. The researchers’ goal was to produce an NLP system that performs well on a variety of tasks with little or no fine-tuning, and previous work had indicated that larger models might be the solution. To test that hypothesis, the team increased the size of their previous model, GPT-2, from 1.5 billion parameters to 175 billion. For training, the team collected several datasets, including the Common Crawl dataset and the English-language Wikipedia. The model was evaluated against several NLP benchmarks, matching state-of-the-art performance on “closed-book” question-answering tasks and setting a new record for the LAMBADA language modeling task.

OpenAI made headlines last year with GPT-2 and their decision not to release the 1.5 billion parameter version of the trained model due to “concerns about malicious applications of the technology.” GPT-2 is one of many large-scale NLP models based on the Transformer architecture. These models are pre-trained on large text corpora, such as the contents of Wikipedia, using self-supervised learning. In this scenario, instead of using a dataset containing inputs paired with expected outputs, the model is given a sequence of text with words “masked” and it must learn to predict the masked words based on the surrounding context. After this pre-training, the models are then fine-tuned with a labelled benchmark dataset for a particular NLP task, such as question-answering.
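The self-supervised objective described above — hide a word and predict it from its context — can be illustrated with a deliberately tiny sketch. The corpus and the count-based “model” here are hypothetical stand-ins; real Transformer models like GPT-2 and GPT-3 learn far richer context statistics over billions of words.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus standing in for a huge pre-training text.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat ate the fish . the dog ate the bone ."
).split()

# Self-supervised training pairs: (surrounding context -> hidden word).
# No labels are needed; the text itself provides the targets.
# Here "context" is just the immediate neighbours; a Transformer
# attends over the entire sequence instead.
context_counts = defaultdict(Counter)
for i in range(1, len(corpus) - 1):
    context = (corpus[i - 1], corpus[i + 1])
    context_counts[context][corpus[i]] += 1

def predict_masked(left, right):
    """Fill in "left [MASK] right" with the most frequent word seen
    in that context, or None if the context never occurred."""
    counts = context_counts.get((left, right))
    return counts.most_common(1)[0][0] if counts else None

print(predict_masked("on", "mat"))  # the corpus only ever saw "on the mat"
```

The same idea scales up: the pre-trained statistics transfer to downstream tasks, and fine-tuning on a labelled benchmark then adapts them to a specific task such as question-answering.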

This paper describes the design, implementation, and evaluation of VanarSena, an automated fault finder for mobile applications (“apps”). The techniques in VanarSena are driven by a study of 25 million real-world crash reports of Windows Phone apps reported in 2012. Our analysis indicates that a modest number of root causes are responsible for many observed failures, but that they occur in a wide range of places in an app, requiring a wide coverage of possible execution paths. VanarSena adopts a “greybox” testing method, instrumenting the app binary to achieve both coverage and speed. VanarSena runs on cloud servers: the developer uploads the app binary; VanarSena then runs several app “monkeys” in parallel to emulate user, network, and sensor data behavior, returning a detailed report of crashes and failures. We have tested VanarSena with 3000 apps from the Windows Phone store, finding that 1108 of them had failures; VanarSena uncovered 2969 distinct bugs in existing apps, including 1227 that were not previously reported. Because we anticipate VanarSena being used in regular regression tests, testing speed is important. VanarSena uses two techniques to improve speed. First, it uses a “hit testing” method to quickly emulate an app by identifying which user interface controls map to the same execution handlers in the code. Second, it generates a ProcessingCompleted event to accurately determine when to start the next interaction. These features are key benefits of VanarSena’s greybox philosophy.
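The “hit testing” optimization — group UI controls that invoke the same handler, then exercise only one representative per group — can be sketched as follows. The control names and handler mapping are hypothetical; in VanarSena this mapping comes from instrumenting the app binary.

```python
from collections import defaultdict

# Hypothetical mapping recovered by instrumentation:
# each tappable UI control -> the event handler its tap invokes.
control_handlers = {
    "list_item_1": "OpenArticleHandler",
    "list_item_2": "OpenArticleHandler",
    "list_item_3": "OpenArticleHandler",
    "refresh_button": "RefreshFeedHandler",
    "settings_button": "OpenSettingsHandler",
}

def pick_representatives(handlers):
    """Group controls by their handler and keep one control per group,
    so the app monkey taps 3 controls instead of 5 while still
    reaching every distinct piece of handler code."""
    groups = defaultdict(list)
    for control, handler in handlers.items():
        groups[handler].append(control)
    return {handler: controls[0] for handler, controls in groups.items()}

reps = pick_representatives(control_handlers)
print(len(reps))  # number of distinct handlers, i.e. interactions to test
```

This is why the greybox approach pays off: a purely blackbox monkey would have to tap every control, while the handler mapping lets the tool prune redundant interactions without sacrificing coverage.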

2014-06

http://hdl.handle.net/1721.1/110759