Limited mileage range and rare mineral mining have been sticking points for electric vehicle batteries, but Tesla is working on a design to last 100 years.
AI has for decades attempted to code commonsense concepts, e.g., in knowledge bases, but struggled to generalise the coded concepts to all the situations a human would naturally generalise them to, and struggled to understand the natural and obvious consequences of what it has been told. This led to brittle systems that did not cope well with situations beyond what their designers envisaged. John McCarthy (1968) said ‘a program has common sense if it automatically deduces for itself a sufficiently wide class of immediate consequences of anything it is told and what it already knows’; that is a problem that has still not been solved. Dreifus (1998) estimated that ‘Common sense is knowing maybe 30 or 50 million things about the world and having them represented so that when something happens, you can make analogies with others’. Minsky presciently noted that common sense would require the capability to make analogical matches between knowledge and events in the world, and furthermore that a special representation of knowledge would be required to facilitate those analogies. We can see the importance of analogies for common sense in the way that basic concepts are borrowed, e.g., the tail of an animal, or the tail of a capital ‘Q’, or the tail-end of a temporally extended event (see also examples of ‘contain’, ‘on’, in Sec. 5.3.1). More than this, for known facts, such as ‘a string can pull but not push an object’, an AI system needs to automatically deduce (by analogy) that a cloth, sheet, or ribbon, can behave analogously to the string. For the fact ‘a stone can break a window’, the system must deduce that any similarly heavy and hard object is likely to break any similarly fragile material. Using the language of Sec. 5.2.1, each of these known facts needs to be treated as a schema, and then applied by analogy to new cases.
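To make the schema-and-analogy idea concrete, here is a minimal sketch in Python; it is not drawn from any cited system, and the property names and matching rule are invented for illustration. A known fact such as ‘a string can pull but not push an object’ is stored together with the properties that make it hold, and any new object sharing those properties inherits the conclusion by analogy.

```python
from dataclasses import dataclass

@dataclass
class Schema:
    """A known fact stored with the conditions under which it applies."""
    name: str
    required_properties: frozenset  # properties a slot-filler must share
    conclusion: str

@dataclass
class Thing:
    name: str
    properties: frozenset

def apply_by_analogy(schema: Schema, thing: Thing):
    """Return the schema's conclusion if the new object satisfies its conditions."""
    if schema.required_properties <= thing.properties:
        return f"{thing.name} {schema.conclusion}"
    return None

# Hypothetical property sets, chosen only to illustrate the matching step.
pull_not_push = Schema(
    name="string-pulls-but-cannot-push",
    required_properties=frozenset({"flexible", "inextensible"}),
    conclusion="can pull but not push an object",
)

for obj in (Thing("cloth", frozenset({"flexible", "inextensible", "flat"})),
            Thing("ribbon", frozenset({"flexible", "inextensible"})),
            Thing("rod", frozenset({"rigid"}))):
    print(obj.name, "->", apply_by_analogy(pull_not_push, obj))
```

The cloth and ribbon inherit the conclusion because they share the relevant properties; the rigid rod does not. A real system would of course need far richer matching than a subset test, but the sketch shows the shape of ‘schema applied by analogy to new cases’.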
Projection is a mechanism that can find analogies (see Sec. 5.3.1) and hence could bridge the gap between models of commonsense concepts (i.e., not the entangled knowledge in word embeddings learnt from language corpora) and text or visual or sensorimotor input. To facilitate this, concepts should be represented by hierarchical compositional models, with higher levels describing relations among elements in the lower-level components (for reasons discussed in Sec. 6.1). There needs to be an explicit symbolic handle on these subcomponents; i.e., they cannot be entangled in a complex network. For visual object recognition, a concept can simply be a set of spatial relations among component features, but higher concepts require a complex model involving multiple types of relations, partial physics theories, and causality. Secs. 5.2 and 5.3 give a hint of what these concepts may look like, but a full example requires a further paper.
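As a hedged sketch of what an explicitly compositional concept might look like, the snippet below uses the capital-‘Q’ example from the previous paragraph; the part names, relation labels, and scene format are assumptions made for illustration, not a proposal from the paper. The point is only that each subcomponent has a symbolic handle the matcher can refer to, rather than being entangled in a complex network.

```python
# Each level of the concept names its subcomponents explicitly and states the
# relations among them; nothing is hidden inside an entangled representation.
CONCEPT_Q = {
    "name": "capital-Q",
    "parts": {"body": "closed-curve", "tail": "short-stroke"},
    "relations": [("tail", "attached-to", "body"),
                  ("tail", "below-right-of", "body")],
}

def matches(concept: dict, scene_parts: dict, scene_relations: set) -> bool:
    """Check that every named part and every stated relation is present in the scene."""
    for role, part_type in concept["parts"].items():
        if scene_parts.get(role) != part_type:
            return False
    return all(rel in scene_relations for rel in concept["relations"])

# A toy scene description in the same (assumed) vocabulary.
scene_parts = {"body": "closed-curve", "tail": "short-stroke"}
scene_relations = {("tail", "attached-to", "body"),
                   ("tail", "below-right-of", "body")}
print(matches(CONCEPT_Q, scene_parts, scene_relations))  # True
```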
Moving beyond the recognition of individual concepts, a complete cognitive system needs to represent and simulate what is happening in a situation, based on some input, e.g., text or vision. This means instantiating concepts in some workspace to flesh out relevant details of a scenario. Sometimes very little data is available for some part of a scenario, and it must be imagined. For example, suppose some machine in a wooden casing moves smoothly across a surface, but the viewer cannot see what mechanism is on the underside; the viewer may conjecture that it rolls on wheels, and if it gets stuck, may imagine a wheel hitting a small stone. This type of imagination is another projection: assuming a prior model of a wheeled vehicle is available, its parts can be projected to positions in the simulation (parts unseen in the actual scenario). Similarly for a wheel hitting a stone: a schema abstracted from a previously experienced episode of such an occurrence can serve as a model. Simulation and projection must work together to imagine scenarios, because an unfolding simulation may trigger new projections. If the simulation is of something happening in the present, then sensor data can enter to constrain the possibilities for the simulation. The importance of analogy for this kind of reasoning in a human-level cognitive agent has also been recognised by other AI researchers (K. D. Forbus & Hinrichs, 2006; Forbus et al., 2008).
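As a rough illustration of simulation and projection working together (again with invented data structures, not the authors' architecture), the toy loop below simulates the wooden-cased machine: because the underside is unseen, a wheel model is projected into the workspace at the start, and when the simulated motion stops unexpectedly, a remembered ‘wheel hits a small stone’ schema is projected in as the imagined cause.

```python
# Workspace holding what is observed plus what has been imagined (projected in).
workspace = {
    "machine": {"position": 0.0, "velocity": 1.0, "underside": "unseen"},
    "imagined": [],
}

# Initial projection: smooth motion plus an unseen underside suggests wheels.
workspace["imagined"].append("wheels on the underside")

def step(ws, obstacle_at=3.0):
    """Advance the simulation one step; an unexpected stop triggers a new projection."""
    m = ws["machine"]
    m["position"] += m["velocity"]
    if m["position"] >= obstacle_at and m["velocity"] > 0:
        m["velocity"] = 0.0
        # Project in a schema abstracted from a remembered episode as the imagined cause.
        ws["imagined"].append("a wheel hitting a small stone")

for _ in range(5):
    step(workspace)

print(workspace["machine"]["position"], workspace["imagined"])
```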
The mainstream approach to driverless cars is slow and difficult. These startups think going all-in on AI will get there faster.
All I can say is that I hope his self-indulgence for his favorite ☆HOBBY☆ — Twitter itself — doesn’t sabotage the interplanetary future he’s defined and actually begun to successfully realize, doing so against all odds in fields as diverse as science, engineering, economics, and politics, and against the recent history of seemingly declining public enthusiasm, funding, and any sort of clear direction. He didn’t just subvert those roadblocks, he OBLITERATED them. SPECTACULARLY.
All that progress and innovation can and WILL be undone in seconds if he makes himself into an ally of a republican party that has abandoned truth, abandoned science, and abandoned every semblance of honor, loyalty, and reason.
A republican party that has abandoned Democracy ITSELF.
Twitter shareholders have filed a lawsuit accusing Elon Musk of engaging in “unlawful conduct” aimed at sowing doubt about his bid to buy the social media company.
Is this true?
The Perseverance rover has been on Mars for two weeks and has now turned its wheels and begun its maiden trek across the Red Planet’s surface. According to new images transmitted to Earth by the one-ton robot on Friday, the voyage was a quick one.
Engineers have worked tirelessly to get the vehicle and its numerous instruments, including a robotic arm, up and operating. Perseverance’s mission is to look for signs of ancient life in Jezero crater, which is located near the planet’s equator. The search will cover roughly 15 kilometers over the following Martian year (approximately two Earth years).
Scientists want to gain access to a series of rock formations in the crater that might preserve traces of ancient biological activity. According to satellite images, one of them appears to be a delta, a structure of silt and sand deposited by a river as it enters a larger body of water. In Jezero’s case, that larger body of water was most likely a crater-wide lake that existed billions of years ago. However, Perseverance must first undertake an experiment before that work can begin.
Millions of laser pulses fired from a helicopter flying over the Amazon basin have revealed evidence of unknown settlements built by a “lost” pre-Hispanic civilization, resolving a long-standing scientific debate about whether the region could sustain a large population, a new study finds.
The findings indicate the mysterious Casarabe people — who lived in the Llanos de Mojos region of the Amazon basin between A.D. 500 and 1400 — were much more numerous than previously thought, and that they had developed an extensive civilization finely adapted to the unique environment they lived in, according to the study, published online Wednesday (May 25) in the journal Nature.
MENLO PARK (KCBS RADIO) – The Menlo Park Police Department has just received a delivery of the first of its electric patrol cars.
Police volunteers in the department will take part in a pilot program to test Tesla Model Ys as patrol vehicles, starting with three; the city eventually hopes to have all of its vehicles be electric by 2030.
Artificial intelligence (AI) is spreading through society into some of the most important sectors of people’s lives – from health care and legal services to agriculture and transportation. As Americans watch this proliferation, they are worried in some ways and excited in others.
In broad strokes, a larger share of Americans say they are “more concerned than excited” by the increased use of AI in daily life than say the opposite. Nearly half of U.S. adults (45%) say they are equally concerned and excited. Asked to explain in their own words what concerns them most about AI, some of those who are more concerned than excited cite their worries about potential loss of jobs, privacy considerations and the prospect that AI’s ascent might surpass human skills – and others say it will lead to a loss of human connection, be misused or be relied on too much.
But others are “more excited than concerned,” and they mention such things as the societal improvements they hope will emerge, the time savings and efficiencies AI can bring to daily life and the ways in which AI systems might be helpful and safer at work. And people have mixed views on whether three specific AI applications are good or bad for society at large.