
Copyright and Artificial Intelligence: An Exceptional Tale

As the US government begins to consider some of the legal implications for copyright in connection with the development and deployment of artificial intelligence, it is important to first step back and ensure that we are properly guided by context and a clear understanding of our goals, grounded in an informed grasp of the relationship of copyright to the development of AI and a fair reading of the state of legal developments around the world. Far too many observers have oversimplified how various countries have addressed the relationship between copyright and AI. In reality, every country that has addressed the issue has rejected the notion that copyright is not implicated, and has developed legal norms that carefully limit the scope of any exceptions with an eye toward facilitating licensing, even where it seeks to expand the development of AI as a national economic imperative.

I have written about the approach taken by the EU in the updated Copyright Directive, and note here that despite claims about Japan’s legislation, even its provisions, as manifested in the 2018 amendments, are designed to avoid conflict with the legitimate interests of copyright owners. While I do not necessarily agree with Japan’s approach, it is important to highlight that even its exceptions, as I understand them: recognize that text and data mining/machine learning does in fact implicate copyright; apply only to materials that have been lawfully acquired; require that the use of each work be “minor” relative to the TDM effort; and provide that license terms must be honored. While it remains unclear to me whether Japan’s goal of respecting copyright as required by international law has been achieved, claims that Japan has removed copyright as an issue that must be addressed in the development of AI are inaccurate.

Smart home hubs leave users vulnerable to hackers

Machine learning programs mean that even encrypted information can give cybercriminals insight into your daily habits.

Smart technology claims to make our lives easier. You can turn on your lights, lock your front door remotely and even adjust your thermostat with the click of a button.

But new research from the University of Georgia suggests that convenience potentially comes at a cost—your personal security.
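The underlying attack class is easy to sketch: even when payloads are encrypted, packet sizes and timing remain visible on the network, and a classifier can map those metadata features to device activity. Below is a minimal, hypothetical illustration in Python using synthetic numbers; it is not the UGA team’s method or data, and the feature choices are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative sketch (not the UGA researchers' method): even when smart-home
# traffic is encrypted, metadata such as packet sizes and timing can leak
# which device is active. Synthetic "flows" stand in for real captures.
rng = np.random.default_rng(1)

def fake_flow(mean_size, mean_gap, n=200):
    # Features per flow: (mean packet size in bytes, mean inter-packet gap in ms)
    return np.column_stack([rng.normal(mean_size, 20, n),
                            rng.normal(mean_gap, 5, n)])

X = np.vstack([fake_flow(120, 50), fake_flow(900, 10)])
y = np.array([0] * 200 + [1] * 200)  # 0 = smart-lock event, 1 = camera stream

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[110, 48], [880, 12]]))  # infers the device from metadata alone
```

Defenses in the literature typically target exactly these side channels, for example by padding packets or injecting dummy traffic so that flows look alike.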

MIT reveals a new type of faster AI algorithm for solving a complex equation

Researchers solved a differential equation behind the interaction of two neurons through synapses, creating a faster AI algorithm.

Artificial intelligence uses a technique called artificial neural networks (ANN) to mimic the way a human brain works. A neural network uses input from datasets to “learn” and output its prediction based on the given information.
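As a concrete, deliberately tiny illustration of that learn-from-data loop, here is a minimal two-layer network in Python that learns XOR by gradient descent. The layer sizes, learning rate, and task are illustrative choices, not anything from the article.

```python
import numpy as np

# A minimal sketch of an artificial neural network: one hidden layer
# trained by gradient descent to learn XOR from a four-row dataset.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    p = sigmoid(h @ W2 + b2)          # network's prediction
    # Backpropagate the squared error and update the weights.
    dp = (p - y) * p * (1 - p)
    dW2, db2 = h.T @ dp, dp.sum(0)
    dh = (dp @ W2.T) * (1 - h**2)
    dW1, db1 = X.T @ dh, dh.sum(0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.1 * grad           # gradient-descent step

print(p.round(2))  # approaches [[0], [1], [1], [0]] after training
```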


Recently, researchers from the Massachusetts Institute of Technology Computer Science and Artificial Intelligence Lab (MIT CSAIL) discovered a quicker way to solve an equation used in the algorithms behind ‘liquid’ neural networks.

MIT solved a century-old differential equation to break ‘liquid’ AI’s computational bottleneck

Last year, MIT developed an AI/ML algorithm capable of learning and adapting to new information while on the job, not just during its initial training phase. These “liquid” neural networks (in the Bruce Lee sense) effectively play 4D chess: their models require time-series data to operate, which makes them ideal for time-sensitive tasks like pacemaker monitoring, weather forecasting, investment forecasting, and autonomous vehicle navigation. But data throughput has become a bottleneck, and scaling these systems has grown prohibitively expensive, computationally speaking.

On Tuesday, MIT researchers announced that they have devised a solution to that restriction, not by widening the data pipeline but by solving a differential equation that has stumped mathematicians since 1907. Specifically, the team solved “the differential equation behind the interaction of two neurons through synapses… to unlock a new type of fast and efficient artificial intelligence algorithms.”

“The new machine learning models we call ‘CfC’s’ [closed-form Continuous-time] replace the differential equation defining the computation of the neuron with a closed form approximation, preserving the beautiful properties of liquid networks without the need for numerical integration,” MIT professor and CSAIL Director Daniela Rus said in a Tuesday press statement. “CfC models are causal, compact, explainable, and efficient to train and predict. They open the way to trustworthy machine learning for safety-critical applications.”
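The published closed-form expression gates between two learned trajectories with a time-dependent sigmoid, so the hidden state at time t can be evaluated directly rather than integrated step by step with an ODE solver. The sketch below uses random-weight stand-ins for the learned heads f, g, and h; it is only meant to show the shape of that computation, not MIT’s implementation.

```python
import numpy as np

# A minimal sketch of the CfC idea: replace numerical integration of the
# neuron's differential equation with a direct, closed-form evaluation.
# The three linear "heads" f, g, h below are illustrative random-weight
# stand-ins for the paper's learned networks.
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
rng = np.random.default_rng(0)

state_dim, in_dim = 2, 1
Wf, Wg, Wh = (rng.normal(size=(state_dim + in_dim, state_dim)) for _ in range(3))

def cfc_state(x, I, t):
    """Closed-form continuous-time update: no ODE solver required."""
    z = np.concatenate([x, I])          # neuron state + external input
    f, g, h = z @ Wf, z @ Wg, z @ Wh    # three small heads
    gate = sigmoid(-f * t)              # time-dependent gating term
    return gate * g + (1.0 - gate) * h  # blend of two trajectories

x = np.zeros(state_dim)
I = rng.normal(size=in_dim)
for t in (0.1, 1.0, 10.0):
    print(f"t={t}: x(t)={cfc_state(x, I, t)}")  # state queried at arbitrary times
```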

Closed-form continuous-time neural networks

Physical dynamical processes can be modelled with differential equations that may be solved with numerical approaches, but this is computationally costly as the processes grow in complexity. In a new approach, dynamical processes are modelled with closed-form continuous-depth artificial neural networks. Improved efficiency in training and inference is demonstrated on various sequence modelling tasks including human action recognition and steering in autonomous driving.

Meta Introduces ‘Tulip,’ A Binary Serialization Protocol That Assists With Data Schematization

Meta introduces ‘Tulip,’ a binary serialization protocol supporting schema evolution. It simultaneously addresses protocol reliability and other issues while assisting with data schematization. Tulip replaces multiple legacy formats in Meta’s data platform and has delivered a considerable increase in performance and efficiency. Meta’s data platform is made up of numerous heterogeneous services, such as warehouse data storage and various real-time systems, exchanging large amounts of data and communicating among themselves via service APIs. As the number of AI and machine learning (ML) workloads in Meta’s systems that use data for training grows, it is necessary to continually work on making the data logging systems more efficient. The schematization of data plays a huge role in creating a platform for data at Meta’s scale. These systems are designed based on the knowledge that every decision and trade-off impacts reliability, data preprocessing efficiency, performance, and the engineers’ developer experience. Changing serialization formats for the data infrastructure is a big bet, but one that offers long-term benefits that let the platform evolve over time.

The Data Analytics Logging Library is present in the web tier and in internal services, and it is responsible for logging analytical and operational data via Scribe, a durable message-queuing system used by Meta. Data is read and ingested from Scribe by, among others, a data platform ingestion service and real-time processing systems. The data analytics reading library helps deserialize data and rehydrate it into a structured payload. Thousands of engineers at Meta create, update, and delete logging schemas every month, and data conforming to these schemas flows over Scribe in the petabyte range each and every day.

Schematization is necessary to ensure that any message logged in the past, present, or future can be reliably (de)serialized at any time, regardless of the (de)serializer’s version, with the utmost fidelity and no data loss. This characteristic is known as safe schema evolution via backward and forward compatibility. The article’s main focus is the on-wire serialization format used to encode the data that is ultimately processed by the data platform. Compared to the two serialization formats previously utilized, Hive Text Delimited and JSON serialization, the new encoding format is more efficient, requiring 40 to 85 percent fewer bytes and 50 to 90 percent fewer CPU cycles to (de)serialize data.
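Tulip’s actual wire format is not spelled out in the article, but the compatibility property it describes can be illustrated with a generic field-tagged binary encoding, in the spirit of Thrift-style formats. The Python sketch below is hypothetical: each field is written as (field id, length, bytes), so an old reader can skip ids it does not recognize (forward compatibility) and a new reader simply treats missing ids as absent (backward compatibility).

```python
import struct

# Hypothetical sketch, not Meta's actual Tulip wire format: it illustrates
# safe schema evolution via a length-prefixed, field-id-tagged encoding.

def serialize(record: dict[int, bytes]) -> bytes:
    out = b""
    for field_id, payload in record.items():
        # Each field: 2-byte id, 4-byte length, then the raw payload bytes.
        out += struct.pack("<HI", field_id, len(payload)) + payload
    return out

def deserialize(buf: bytes, known_ids: set[int]) -> dict[int, bytes]:
    fields, offset = {}, 0
    while offset < len(buf):
        field_id, length = struct.unpack_from("<HI", buf, offset)
        offset += struct.calcsize("<HI")
        value = buf[offset:offset + length]
        offset += length
        if field_id in known_ids:   # unknown ids are skipped, not fatal
            fields[field_id] = value
    return fields

# A v2 writer adds field 3; a v1 reader that only knows {1, 2} still works.
wire = serialize({1: b"user42", 2: b"click", 3: b"new-in-v2"})
print(deserialize(wire, known_ids={1, 2}))
```

Because every field carries its own id and length, neither side needs to be upgraded in lockstep, which is exactly the property that lets thousands of logging schemas change every month without breaking readers.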

This Comic Series Is Gorgeous. You’d Never Know AI Drew the Whole Thing

You might expect a comic book series featuring art generated entirely by artificial intelligence to be full of surreal images that have you tilting your head trying to grasp what kind of sense-shifting madness you’re looking at.

Not so with the images in The Bestiary Chronicles, a free, three-part comics series from Campfire Entertainment, a New York-based production house focused on creative storytelling.

The visuals in the trilogy — believed to be the first comics series made with AI-assisted art — are stunning. They’re also stunningly precise, as if they’ve come straight from the hand of a seasoned digital artist with a very specific story and style in mind.

Transforming bacterial cells into living artificial neural circuits

Bringing together concepts from electrical engineering and bioengineering tools, Technion and MIT scientists collaborated to produce cells engineered to compute sophisticated functions: biocomputers of sorts.

Graduate students and researchers from Technion—Israel Institute of Technology Professor Ramez Daniel’s Laboratory for Synthetic Biology & Bioelectronics worked together with Professor Ron Weiss from the Massachusetts Institute of Technology to create genetic “devices” designed to perform computations like artificial neural circuits. Their results were recently published in Nature Communications.

The genetic circuit was inserted into the bacterial cell in the form of a plasmid: a relatively short DNA molecule that remains separate from the bacteria’s “natural” genome. Plasmids also exist in nature, and serve various functions. The research group designed the plasmid’s genetic sequence to function as a simple computer, or more specifically, a simple artificial neural network. This was done by means of several genes on the plasmid regulating each other’s activation and deactivation according to outside stimuli.
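One way to see the analogy to an artificial neuron is to model a reporter gene’s expression as a weighted sum of its regulators (positive weights for activation, negative for repression) passed through a Hill function, which plays the role of the neuron’s activation function. The Python sketch below is an illustrative model with made-up parameters, not the group’s published circuit.

```python
import numpy as np

# Illustrative sketch (not the Technion/MIT group's actual model): genes that
# activate and repress one another behave like a neuron whose "activation
# function" is the saturating Hill response of gene expression.

def hill(x, K=1.0, n=2.0):
    """Hill activation: expression saturates as regulator level x grows."""
    return x**n / (K**n + x**n)

def gene_neuron(inputs, weights):
    """One 'genetic neuron': weighted regulatory drive -> expression level."""
    drive = max(np.dot(weights, inputs), 0.0)  # net activation minus repression
    return hill(drive)

inducers = np.array([0.2, 1.5])        # outside stimuli (e.g., small molecules)
weights = np.array([2.0, -0.5])        # activation (+) and repression (-)
print(gene_neuron(inducers, weights))  # predicted reporter expression level
```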
