
Daijiworld

Although this article highlights the robots used in courts across India, robots in the courtroom have also been proposed in the US. So if we ask ourselves, "Can robots take over court cases and reduce the overloaded burden of the court system?" — in some areas, as a legal assistant or paralegal supporting attorneys, I would say yes.

However, taking over full ownership of a case gets tricky in the US. For example, suppose I am a client with a robot representing me, and I lose my court case. Can I claim misrepresentation under the current laws? You bet I can.

What do we need to do so that the laws grant robots the same level of recognition and equality that a human attorney has today? For that kind of recognition to exist, many laws on the books (state, county, city, and federal) would need to be reviewed, amendments proposed, and votes held on all of them — which will take a very long time, because the volume of city and county laws in places like NYC and Chicago is extensive and expensive for taxpayers.

Again, we must be very pragmatic at this point before stating that by 2020 the courts will be nothing but a judge and a plaintiff with a robot, etc.


According to a report in The New Indian Express (8-2-16), titled ‘Supreme Court Talks Tough on Frivolous Pleas’, the Court has come down heavily on litigants who prolong cases by filing frivolous applications. The Bench headed by Justice Dipak Misra said: “The Indian judicial system is grossly afflicted with frivolous litigation. Ways and means need to be evolved to deter litigants from their compulsive obsession towards senseless and ill-considered claims. One needs to keep in mind that in the process of litigation, there is an innocent sufferer on the other side of every irresponsible and senseless claim.”

Justice Misra blamed litigants’ ‘compulsive obsession’ without addressing the role of some irresponsible and self-serving lawyers in instigating or encouraging litigation and prolonging it through unending adjournments that clog the Indian justice delivery system. Take, for instance, a case thrown out by the Sitamarhi Chief Judicial Magistrate on February 1, 2016 — a petition, by advocate Chandan Kumar Singh, against Lord Rama and his brother Laxman over banishing goddess Sita to exile in a forest, with the judge saying that the issue is ‘beyond logic and facts’. Meanwhile, three cases have been filed in the same court against Singh for his ‘defamatory’ acts against the Almighty, and the court has admitted them under various Sections of the Indian Penal Code. Thus, the tamasha goes on!

Thankfully, this case was handled at the district level. But in the past, apparently instigated or encouraged by the lawyer concerned, the case of a performing sloth bear, Munna, whose owner Nasir Khan was charged under the Wildlife Act, reached the Supreme Court. He lost. But it shows how litigants, apparently goaded on by lawyers, rush to the courts and clog them, preventing them from handling genuine cases.

Read more

US highway authorities concede that artificial intelligence can legally ‘drive’ a car

In a major step forward for self-driving cars and the industry seeking to manufacture them, US highway authorities have informed Google that its autonomous vehicle systems could qualify as a “driver” in the eyes of the law.

A letter addressed to the company from the National Highway Traffic Safety Administration (NHTSA) last week suggests that if self-driving vehicles (SDVs) can satisfy a number of safety standards, the fact that artificial intelligence (AI) is controlling the car – in the absence of any human controls – would not be a barrier to the car legally driving on US roads.

“We agree with Google its SDV will not have a ‘driver’ in the traditional sense that vehicles have had drivers during the last more than one hundred years,” writes chief counsel for the NHTSA, Paul A. Hemmersbaugh. “If no human occupant of the vehicle can actually drive the vehicle, it is more reasonable to identify the ‘driver’ as whatever (as opposed to whoever) is doing the driving. In this instance, an item of motor vehicle equipment, the [SDS Self-Driving System], is actually driving the vehicle.”

Read more

Wall Street Is Trying to Beat Silicon Valley at Its Own Game

I have worked in both tech and Wall Street firms. One thing about Wall Street (WS) is that WS knows legal and compliance, trading, and financials better than just about anyone. And tech is an industry that can do innovation better than just about anyone, as well as build world-class businesses from the ground up. So it will be interesting to see how these two titan industries play out.


Banks race to beat the patent trolls—and Silicon Valley.

Read more

Investment platforms must get back in the game

Good article and perspective. And I believe areas like finance and legal will be addressed with AI over the next 5 to 7 years. However, many of our critical needs are in healthcare — particularly medical technology — and infrastructure (including security), and these need to be upgraded and improved now.


I recently read a thought-provoking article by Klaus Schwab, titled ‘The Fourth Industrial Revolution: what it means, how to respond’. At the beginning of the article Schwab describes the first three industrial revolutions, which I think we’re all fairly familiar with:

1784 – steam, water and mechanical production equipment.

Read more

Who’s to Blame (Part 1): The Legal Vacuum Surrounding Autonomous Weapons

The Future of Life Institute illustrates its objection to autonomous lethal robots:

“Outrage swells within the international community, which demands that whoever is responsible for the atrocity be held accountable. Unfortunately, no one can agree on who that is…”


The year is 2020 and intense fighting has once again broken out between Israel and Hamas militants based in Gaza. In response to a series of rocket attacks, Israel rolls out a new version of its Iron Dome air defense system. Designed in a huge collaboration involving defense companies headquartered in the United States, Israel, and India, this third generation of the Iron Dome has the capability to act with unprecedented autonomy and has cutting-edge artificial intelligence technology that allows it to analyze a tactical situation by drawing from information gathered by an array of onboard sensors and a variety of external data sources. Unlike prior generations of the system, the Iron Dome 3.0 is designed not only to intercept and destroy incoming missiles, but also to identify and automatically launch a precise, guided-missile counterattack against the site from where the incoming missile was launched. The day after the new system is deployed, a missile launched by the system strikes a Gaza hospital far removed from any militant activity, killing scores of Palestinian civilians. Outrage swells within the international community, which demands that whoever is responsible for the atrocity be held accountable. Unfortunately, no one can agree on who that is…

Much has been made in recent months and years about the risks associated with the emergence of artificial intelligence (AI) technologies and, with it, the automation of tasks that once were the exclusive province of humans. But legal systems have not yet developed regulations governing the safe development and deployment of AI systems or clear rules governing the assignment of legal responsibility when autonomous AI systems cause harm. Consequently, it is quite possible that many harms caused by autonomous machines will fall into a legal and regulatory vacuum. The prospect of autonomous weapons systems (AWSs) throws these issues into especially sharp relief. AWSs, like all military weapons, are specifically designed to cause harm to human beings—and lethal harm, at that. But applying the laws of armed conflict to attacks initiated by machines is no simple matter.

The core principles of the laws of armed conflict are straightforward enough. Those most important to the AWS debate are: attackers must distinguish between civilians and combatants; they must strike only when it is actually necessary to a legitimate military purpose; and they must refrain from an attack if the likely harm to civilians outweighs the military advantage that would be gained. But what if the attacker is a machine? How can a machine make the seemingly subjective determination regarding whether an attack is militarily necessary? Can an AWS be programmed to quantify whether the anticipated harm to civilians would be “proportionate?” Does the law permit anyone other than a human being to make that kind of determination? Should it?

Read more

DARPA researchers to push limits of reading, writing brain neurons

DARPA is making great progress in its research on mapping and understanding the human brain. It is now working on a project that breaks Stevenson’s Law. Stevenson’s Law states that the number of neurons that can be recorded simultaneously doubles roughly every seven years, and currently sits at about 500 neurons; DARPA’s goal is to take it to 1 million neurons. That means taking brain–machine interface capabilities to a level where anyone or anything with this technology can outperform and control machines in ways we currently only dream about.
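To see why DARPA has to "break" Stevenson's Law rather than wait it out, a back-of-envelope calculation helps. The sketch below (my own illustration, not from the article) projects how long the historical doubling trend would take to go from ~500 to 1 million simultaneously recorded neurons:

```python
import math

def years_to_reach(target, current=500, doubling_period=7.0):
    """Years needed to grow from `current` to `target` recorded neurons,
    assuming a doubling every `doubling_period` years (Stevenson's Law)."""
    doublings = math.log2(target / current)  # how many doublings are needed
    return doublings * doubling_period

# Growing from 500 to 1,000,000 neurons needs ~11 doublings,
# i.e. roughly 77 years at the historical rate.
print(round(years_to_reach(1_000_000)))  # -> 77
```

In other words, the trend line alone would not get there until late this century, which is why the project aims to leapfrog the curve.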


This week neuroscientists met with DARPA in Arlington, Virginia, to embark on a project breaking Stevenson’s Law.

Read more

Are you covered? Emerging issues for health care providers under cyber risk insurance

Tough to be a doctor these days — could be bad news for providers with limited or no cyber risk coverage.


Providers are focusing on cybersecurity with increased urgency. Cyberattacks on health-care organizations reached an all-time high in 2015 and aren’t expected to slow down in 2016, Harry Greenspun, director for Deloitte’s Center for Health Solutions, told Bloomberg BNA. One element of a comprehensive strategy to address data security is customized cyber risk insurance. Recent case law supports standing for class action litigants alleging future injuries, which may not be covered by some policy forms. We urge providers to review their cyber risk coverage with the increasing risks and this new case law in mind.

Specifically, it is critical that cyber risk insurance is designed to both: adequately mitigate future harm to those whose private information is compromised as a result of a data breach; and satisfy the full array of damages sought by such third parties, including damages for future injuries resulting from the anticipated improper use of data. These considerations are increasingly important because the policies available in today’s market are not standardized. While many absorb some of the costs associated with notification and fraud monitoring, existing forms may not protect against damages sought for susceptibility to identity theft.

The Remijas decision

Last fall, the Seventh Circuit reviewed the “substantial risk” standard for Article III class action standing in Remijas v. Neiman Marcus Group and held that even a 2.5 percent rate of compromised credit card holders is enough to show a substantial risk to the entire universe of credit card holders with breached data. 794 F.3d 688, 693 (7th Cir. 2015).

Read more

The Race To Mine Asteroids Gains International Support

Welcome to the future, folks.

There is vastly more mineral wealth floating around our solar system than Earth ever contained. As these asteroids are mined, tunnels will be built, forming the basis for a space station and/or colony — a fact that more than doubles the usefulness of the entire operation, and its return on investment. I think THIS is going to be the way we begin to colonize our solar system. Also, a lot of the processes that are hazardous (to the environment, human beings, or both) and are an inevitable byproduct of heavy industry on our planet could be exported to stations like these, tripling the value of the entire operation.


Today, the Luxembourg Government announced that they are taking steps to become Europe’s hub for mining space resources.

The small European country plans to establish the necessary legal and regulatory framework and invest in related research and development projects. They’re even considering investing in already-established asteroid mining companies like the U.S.-based Deep Space Industries and Planetary Resources.

This announcement comes shortly after the United States took a huge step forward in making commercial space mining legal. President Obama signed the U.S. Commercial Space Launch Competitiveness Act (CSLCA) in November, which stated that U.S. companies are entitled to maintain property rights of resources they’ve obtained from outer space.
