
Microsoft-owned GitHub is launching its Copilot AI tool today, which helps suggest lines of code to developers inside their code editor. GitHub originally teamed up with OpenAI last year to launch a preview of Copilot, and it’s generally available to all developers today. Priced at US$10 per month or US$100 a year, GitHub Copilot is capable of suggesting the next line of code as developers type in an integrated development environment (IDE) like Visual Studio Code, Neovim, and JetBrains IDEs. Copilot can suggest complete methods and complex algorithms alongside boilerplate code and assistance with unit testing. More than 1.2 million developers signed up to use the GitHub Copilot preview over the past 12 months, and it will remain a free tool for verified students and maintainers of popular open-source projects. In files where it’s enabled, GitHub says nearly 40 percent of code is now being written by Copilot.

“Over the past year, we’ve continued to iterate and test workflows to help drive the ‘magic’ of Copilot,” Ryan J. Salva, VP of product at GitHub, told TechCrunch via email. “We not only used the preview to learn how people use GitHub Copilot but also to scale the service safely.”

“We specifically designed GitHub Copilot as an editor extension to make sure nothing gets in the way of what you’re doing,” GitHub CEO Thomas Dohmke says in a blog post. “GitHub Copilot distills the collective knowledge of the world’s developers into an editor extension that suggests code in real-time, to help you stay focused on what matters most: building great software.”

The talk is provided on a free/donation basis. If you would like to support my work, you can PayPal me at this link:
https://paypal.me/wai69
Or, to support me longer term, back me on Patreon at: https://www.patreon.com/waihtsang.

Unfortunately, my internet link went down during the second Q&A session at the end and the recording cut off. A shame, as loads of great information came out about FPGA/ASIC implementations, AI for VR/AR, C/C++ and a whole load of other riveting and most interesting techie stuff. But thankfully the main part of the talk was recorded.

TALK OVERVIEW
This talk is about realizing the ideas behind the Fractal Brain theory, and the unifying theory of life and intelligence discussed in the last Zoom talk, in the form of useful technology. The Startup at the End of Time will be the vehicle for the development and commercialization of a new generation of artificial intelligence (AI) and machine learning (ML) algorithms.

We will show in detail how the theoretical fractal brain/genome ideas lead to a whole new way of doing AI and ML that overcomes most of the central limitations of, and problems associated with, existing approaches. A compelling feature of this approach is that it is based on how neurons and brains actually work, unlike existing artificial neural networks, which, though they make sensational headlines, are impeded by severe limitations and are based on an out-of-date understanding of neurons from about 70 years ago. We hope to convince you that this new approach really is the path to true AI.

In the last Zoom talk, we discussed a great unification of scientific ideas relating to life and brain/mind science through the application of the mathematical idea of symmetry. In turn, the same symmetry approach leads to a unification of a mass of ideas relating to computer and information science. There has been talk in recent years of a ‘master algorithm’ of machine learning and AI. We’ll explain that it goes far deeper than that and show how the most important fundamental algorithms in use in the world today, those relating to data compression, databases, search engines and existing AI/ML, can be unified into a single algorithm. Furthermore, and importantly, this algorithm is completely fractal, or scale invariant. The same algorithm that performs all these functions can run on a microcontroller unit (MCU), mobile phone, laptop or workstation, going right up to a supercomputer.

The applications and utility of this new technology are endless. We will discuss the roadmap by which the sort of theoretical ideas I’ve been discussing in the Zoom, academic and public talks over the past few years, and which I’ve written about in the Fractal Brain Theory book, will become practical technology, and how the Java/C/C++ code running on my workstation and mobile phones will become products and services.

From there, they ran flight tests using a specially designed motion-tracking system. Each electroluminescent actuator served as an active marker that could be tracked using iPhone cameras. The cameras detect each light color, and a computer program they developed tracks the position and attitude of the robots to within 2 millimeters of state-of-the-art infrared motion capture systems.
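The tracking setup described above (colored light markers seen by ordinary cameras, then turned into position and attitude estimates) begins with a standard color-segmentation step. As a rough illustration only, here is a minimal OpenCV sketch of that first step; the HSV color ranges, camera index and marker names are placeholders, and the MIT team's multi-camera triangulation and attitude estimation are not reproduced.

```python
# Minimal sketch of color-marker detection, assuming an OpenCV video feed.
# The HSV ranges and camera index are illustrative placeholders, not values
# from the MIT study.
import cv2
import numpy as np

# Hypothetical HSV ranges for two electroluminescent marker colors.
MARKER_RANGES = {
    "blue": ((100, 150, 150), (130, 255, 255)),
    "green": ((45, 150, 150), (75, 255, 255)),
}

def marker_centroids(frame):
    """Return the pixel centroid of each colored marker found in the frame."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    centroids = {}
    for name, (lo, hi) in MARKER_RANGES.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        moments = cv2.moments(mask)
        if moments["m00"] > 0:  # marker visible in this frame
            cx = moments["m10"] / moments["m00"]
            cy = moments["m01"] / moments["m00"]
            centroids[name] = (cx, cy)
    return centroids

cap = cv2.VideoCapture(0)  # placeholder camera index
ok, frame = cap.read()
if ok:
    print(marker_centroids(frame))
cap.release()
```

In a full pipeline, centroids from several calibrated cameras would then be triangulated to recover each robot's 3D position and attitude.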

“We are very proud of how good the tracking result is, compared to the state-of-the-art. We were using cheap hardware, compared to the tens of thousands of dollars these large motion-tracking systems cost, and the tracking results were very close,” Kevin Chen says.

In the future, they plan to enhance that motion tracking system so it can track robots in real-time. The team is working to incorporate control signals so the robots could turn their light on and off during flight and communicate more like real fireflies. They are also studying how electroluminescence could even improve some properties of these soft artificial muscles, Kevin Chen says.

Microplastics are found nearly everywhere on Earth and can be harmful to animals if they’re ingested. But it’s hard to remove such tiny particles from the environment, especially once they settle into nooks and crannies at the bottom of waterways. Now, researchers in ACS’ Nano Letters have created a light-activated fish robot that “swims” around quickly, picking up and removing microplastics from the environment.

Because microplastics can fall into cracks and crevices, they’ve been hard to remove from aquatic environments. One solution that’s been proposed is using small, flexible and self-propelled robots to reach these pollutants and clean them up. But the materials typically used for soft robots are hydrogels and elastomers, and they can be damaged easily in aquatic environments. Another material, mother-of-pearl, also known as nacre, is strong and flexible, and is found on the inside surface of clam shells. Nacre layers have a microscopic gradient, going from one side with lots of calcium carbonate mineral-polymer composites to the other side with mostly a silk protein filler. Inspired by this natural material, Xinxing Zhang and colleagues wanted to try a similar type of gradient structure to create a durable and bendable material for soft robots.

The researchers linked β-cyclodextrin molecules to sulfonated graphene, creating composite nanosheets. Then, solutions of the nanosheets at different concentrations were incorporated into polyurethane latex mixtures. A layer-by-layer assembly method created an ordered concentration gradient of the nanocomposites through the material, from which the team formed a tiny fish robot that was 15 mm (about half an inch) long. Rapidly turning a near-infrared laser on and off at the fish’s tail caused it to flap, propelling the robot forward. The robot could move at 2.67 body lengths per second, a speed faster than previously reported for other soft swimming robots and about the same as active phytoplankton moving in water. The researchers showed that the swimming fish robot could repeatedly adsorb nearby polystyrene microplastics and transport them elsewhere. The material could also heal itself after being cut, while still maintaining its ability to adsorb microplastics.
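For a sense of scale, the reported relative speed can be converted into an absolute one; the short calculation below simply assumes the 15 mm body length stated above.

```python
# Convert the reported relative speed into an absolute speed,
# assuming the 15 mm body length stated above.
body_length_mm = 15
body_lengths_per_s = 2.67
speed_mm_per_s = body_length_mm * body_lengths_per_s
print(f"{speed_mm_per_s:.1f} mm/s")  # ~40 mm/s, i.e. about 4 cm per second
```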

By combining two distinct approaches into an integrated workflow, Singapore University of Technology and Design (SUTD) researchers have developed a novel automated process for designing and fabricating customized soft robots. Their method, published in Advanced Materials Technologies, can be applied to other kinds of soft robots—allowing their mechanical properties to be tailored in an accessible manner.

Though robots are often depicted as stiff, metallic structures, an emerging class of pliable machines known as soft robots is rapidly gaining traction. Inspired by the flexible forms of living organisms, soft robots have wide applications in sensing, movement, object grasping and manipulation, among others. Yet, such robots are still mostly fabricated through manual casting techniques—limiting the complexity and geometries that can be achieved.

“Most fabrication approaches are predominantly manual due to a lack of standard tools,” said SUTD Assistant Professor Pablo Valdivia y Alvarado, who led the study. “But 3D printing or additive manufacturing is slowly coming into play as it facilitates repeatability and allows more complex designs—improving quality and performance.”

An international team of scientists, led by the University of Leeds, has assessed how robotics and autonomous systems might facilitate or impede the delivery of the UN Sustainable Development Goals (SDGs).

Their findings identify key opportunities and key threats that need to be considered while developing, deploying and governing robotics and autonomous systems.

The key opportunities robotics and autonomous systems present are through autonomous task completion, supporting human activities, fostering innovation, enhancing remote access and improving monitoring. Emerging threats relate to reinforcing inequalities, exacerbating environmental change, diverting resources from tried-and-tested solutions, and reducing freedom and privacy through inadequate governance.

Artificial intelligence (AI) and machine learning techniques have proved to be very promising for completing numerous tasks, including those that involve processing and generating language. Language-related machine learning models have enabled the creation of systems that can interact and converse with humans, including chatbots, smart assistants, and smart speakers.

To tackle dialog-oriented tasks, language models should be able to learn high-quality dialog representations: representations that summarize the ideas expressed by the two conversing parties about specific topics, as well as how those dialogs are structured.

Researchers at Northwestern University and AWS AI Labs have recently developed a self-supervised learning model that can learn effective dialog representations for different types of dialogs. This model, introduced in a paper pre-published on arXiv, could be used to develop more versatile and better performing dialog systems using a limited amount of training data.
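The preprint's exact training objective is not described in this summary, but self-supervised dialog representation learning is often framed contrastively: two corrupted views of the same dialog should embed close together, while other dialogs in the batch are pushed apart. The PyTorch sketch below illustrates only that generic idea; the encoder, the view generation (e.g., turn masking) and the temperature are assumptions, not details from the paper.

```python
# Generic contrastive sketch for dialog representations (not the authors'
# exact method): two views of the same dialog are pulled together, other
# dialogs in the batch are pushed apart.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """NT-Xent-style loss between two batches of dialog embeddings."""
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature   # pairwise similarities
    targets = torch.arange(z1.size(0))   # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: random tensors stand in for the output of a shared dialog encoder.
batch, dim = 8, 128
view_a = torch.randn(batch, dim, requires_grad=True)  # e.g., encoder(dialog, mask A)
view_b = torch.randn(batch, dim, requires_grad=True)  # e.g., encoder(dialog, mask B)
loss = info_nce(view_a, view_b)
loss.backward()  # in training, this gradient would update the shared encoder
print(loss.item())
```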

An engineer from the Johns Hopkins Center for Language and Speech Processing has developed a machine learning model that can distinguish functions of speech in transcripts of dialogs outputted by language understanding, or LU, systems in an approach that could eventually help computers “understand” spoken or written text in much the same way that humans do.

Developed by CLSP Assistant Research Scientist Piotr Zelasko, the new model identifies the intent behind words and organizes them into categories such as “Statement,” “Question,” or “Interruption” in the final transcript, a task called “dialog act recognition.” By providing other models with a more organized and segmented version of the text to work with, Zelasko’s model could become a first step in making sense of a conversation, he said.

“This new method means that LU systems no longer have to deal with huge, unstructured chunks of text, which they struggle with when trying to classify things such as the topic, sentiment, or intent of the text. Instead, they can work with a series of expressions, which are saying very specific things, like a question or interruption. My model enables these systems to work where they might have otherwise failed,” said Zelasko, whose study appeared recently in Transactions of the Association for Computational Linguistics.
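Zelasko's model is trained on conversational corpora and is not reproduced here; as a much smaller stand-in, the sketch below frames dialog act recognition as plain per-utterance classification with scikit-learn. The labels mirror the categories named above, while the tiny training set is invented purely for illustration.

```python
# Toy sketch of dialog act recognition as utterance classification.
# The training lines are invented examples, not the JHU model or its data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_utterances = [
    "I think we should ship on Friday.",
    "The build passed last night.",
    "When does the meeting start?",
    "Could you repeat that?",
    "Sorry to jump in, but",
    "Hold on, one second",
]
train_acts = ["Statement", "Statement", "Question", "Question",
              "Interruption", "Interruption"]

# Bag-of-ngrams features plus a linear classifier over dialog act labels.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(train_utterances, train_acts)

transcript = ["So the demo is ready.", "Wait, is it though?", "Let me finish"]
for utterance, act in zip(transcript, model.predict(transcript)):
    print(f"{act:>12}: {utterance}")
```

A real system would replace the bag-of-ngrams classifier with a neural model that also uses conversational context, since acts like interruptions depend on what came before.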

As robots are gradually introduced into various real-world environments, developers and roboticists will need to ensure that they can safely operate around humans. In recent years, they have introduced various approaches for estimating the positions and predicting the movements of robots in real-time.

Researchers at the Universidade Federal de Pernambuco in Brazil have recently created a new deep learning model to estimate the pose of robotic arms and predict their movements. This model, introduced in a paper pre-published on arXiv, is specifically designed to enhance the safety of robots while they are collaborating or interacting with humans.

“Motivated by the need to anticipate accidents during human-robot interaction (HRI), we explore a framework that improves the safety of people working in close proximity to robots,” Djamel H. Sadok, one of the researchers who carried out the study, told TechXplore. “Pose detection is seen as an important component of the overall solution. To this end, we propose a new architecture for Pose Detection based on Self-Calibrated Convolutions (SCConv) and Extreme Learning Machine (ELM).”
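The quote names two building blocks: Self-Calibrated Convolutions for feature extraction and an Extreme Learning Machine for the final mapping. The numpy sketch below illustrates only the ELM half on synthetic data (random, untrained hidden weights plus a single least-squares solve for the output weights); the feature dimensions and data are assumptions, not the authors' implementation.

```python
# Minimal numpy sketch of an Extreme Learning Machine (ELM) regressor.
# The SCConv feature extractor and the real pose data are not reproduced;
# random features and synthetic targets stand in for them.
import numpy as np

rng = np.random.default_rng(0)

class ELMRegressor:
    def __init__(self, n_hidden=256):
        self.n_hidden = n_hidden

    def fit(self, X, Y):
        # Hidden-layer weights are random and never trained (the ELM idea).
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        # Output weights come from a single least-squares solve.
        self.beta, *_ = np.linalg.lstsq(H, Y, rcond=None)
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Toy usage: map 64-dim "image features" to 3D joint coordinates.
X = rng.normal(size=(500, 64))                        # stand-in for SCConv features
Y = X[:, :3] ** 2 + 0.1 * rng.normal(size=(500, 3))   # synthetic "poses"
model = ELMRegressor().fit(X, Y)
print(model.predict(X[:5]).shape)                     # (5, 3)
```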