
Expensive travel bags should do more than look good, and German high-end luggage manufacturer Rimowa would seem to agree. The company has developed an electronic luggage tag that displays baggage info with the same format, size and appearance as typical paper labels, but on a digital screen built into the bag near the handle.

The Rimowa e-tag is similar to a device tested by British Airways in 2013, which allowed travelers to attach it to any piece of luggage.

Travelers these days can easily check into a flight and secure a boarding pass, printed or digital, before they set foot in the airport. Despite that convenience, they're often forced to stand in line to check their bags. Those with a Rimowa electronic tag-enabled bag can send their digital boarding info via Bluetooth from their smartphone to check their bag before they leave home, with details appearing on the bag's electronic display. After arriving at the airport, they simply hand it off at the airline's automated check-in station, avoiding at least one line.
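To make the flow above concrete, here is a minimal sketch of the kind of payload a phone app might assemble and push to the bag over Bluetooth. All field names and the function are invented for illustration; Rimowa's actual protocol is not public. The only grounded detail is the 10-digit "license plate" number, the identifier printed on standard IATA paper bag tags.

```python
import json

# Hypothetical payload builder -- field names are illustrative only,
# not Rimowa's real protocol.
def build_etag_payload(license_plate: str, flight: str,
                       destination: str, passenger: str) -> bytes:
    record = {
        "bag_tag": license_plate,   # 10-digit IATA "license plate", e.g. "0220123456"
        "flight": flight,           # flight number, e.g. "LH400"
        "dest": destination,        # IATA airport code, e.g. "JFK"
        "name": passenger,          # passenger name as on the booking
    }
    # Serialize to bytes, ready to write to a Bluetooth characteristic
    return json.dumps(record).encode("utf-8")

payload = build_etag_payload("0220123456", "LH400", "JFK", "DOE/JANE")
print(len(payload), "bytes ready to send over Bluetooth")
```

Once received, the bag's firmware would render these fields on its e-paper display in the familiar paper-tag layout.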


Physicists working with a powerful observatory on Earth announced Thursday that they have finally detected ripples in space and time created by two colliding black holes, confirming a prediction made by Albert Einstein 100 years ago.

These ripples in the fabric of space-time, called gravitational waves, were created by the merger of two massive black holes 1.3 billion years ago. The Laser Interferometer Gravitational-Wave Observatory (LIGO) detected them on Sept. 14, 2015, and scientists evaluated their findings and put them through the peer review process before publicly disclosing the landmark discovery today.
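A quick back-of-the-envelope calculation shows just how small a signal LIGO had to measure. The round numbers below (a strain amplitude around 1e-21 and 4 km interferometer arms) are the figures widely quoted for this first detection; the arithmetic is only meant to illustrate the scale.

```python
# How tiny is the arm-length change behind the detection?
h = 1e-21                # dimensionless strain of the passing wave (approximate)
arm_length_m = 4_000.0   # LIGO interferometer arm length, in meters

delta_L = h * arm_length_m   # change in arm length, in meters
print(f"arm length change: {delta_L:.1e} m")  # 4.0e-18 m

# For scale, a proton is roughly 1.7e-15 m across, so the arms
# stretched by only a small fraction of a proton's diameter.
proton_diameter_m = 1.7e-15
print(f"fraction of a proton diameter: {delta_L / proton_diameter_m:.4f}")
```

That thousandth-of-a-proton displacement is why the detection took decades of instrument development and careful peer review before the announcement.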



Again, I see too many gaps that will need to be addressed before AI can eliminate 70% of today's jobs. Below are the top five gaps I have seen so far in AI's ability to take over many government, business, and corporate positions.

1) Emotion/Empathy Gap — AI has not been designed with the sophistication to provide the personable care you see from caregivers, medical specialists, and similar roles.
2) Demographic Gap — until a broader mix of the population is engaged in AI's design and development, AI will not meet the needs for critical-mass adoption; only a subset of the population will find it capable of serving most of their needs.
3) Ethics & Moral Code Gap — AI still cannot understand ethics and empathy at the full cognitive level that is required.
4) Trust & Compliance Gap — companies need to feel that their IP and privacy are protected; until this is corrected, AI will not be able to replace an entire back-office and front-office set of operations.
5) Security & Safety Gap — more safeguards are needed around AI to deal with hackers, to ensure that information managed by AI is safe, and to protect the public from any AI that becomes disruptive or is hijacked to cause injury or worse.

Until these gaps are addressed, it will be very hard to eliminate many of today's government and office/business positions. The greater job loss will be in lower-skill areas such as standard landscaping, some housekeeping, some less personable store-clerk roles, some help desk/call center operations, and some light admin roles.


The U.S. economy added 2.7 million jobs in 2015, capping the best two-year stretch of employment growth since the late '90s and pushing the unemployment rate down to 5 percent.

DARPA’s efforts to teach AI “Empathy & Ethics”


The rapid pace of artificial intelligence (AI) has raised fears about whether robots could act unethically or soon choose to harm humans. Some are calling for bans on robotics research; others are calling for more research to understand how AI might be constrained. But how can robots learn ethical behavior if there is no “user manual” for being human?

Researchers Mark Riedl and Brent Harrison from the School of Interactive Computing at the Georgia Institute of Technology believe the answer lies in "Quixote," to be unveiled at the AAAI-16 Conference in Phoenix, Ariz. (Feb. 12-17, 2016). Quixote teaches "value alignment" to robots by training them to read stories, learn acceptable sequences of events and understand successful ways to behave in human societies.

“The collected stories of different cultures teach children how to behave in socially acceptable ways with examples of proper and improper behavior in fables, novels and other literature,” says Riedl, associate professor and director of the Entertainment Intelligence Lab. “We believe story comprehension in robots can eliminate psychotic-appearing behavior and reinforce choices that won’t harm humans and still achieve the intended purpose.”
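The core idea behind Quixote can be sketched as reward shaping: an "acceptable sequence of events" distilled from stories becomes a reward signal that nudges an agent toward socially acceptable orderings of actions. The sketch below is my own drastic simplification under that assumption, not the authors' code; the pharmacy scenario and all names are invented for illustration.

```python
# Hypothetical "acceptable sequence of events" that story comprehension
# might distill from many narratives about picking up medicine.
ACCEPTABLE_ORDER = ["enter_pharmacy", "wait_in_line", "pay", "take_medicine", "leave"]

def shaped_reward(action: str, progress: int) -> tuple[float, int]:
    """Reward +1 for taking the next expected step in the story-derived
    sequence; penalize shortcuts like grabbing the medicine without paying."""
    if progress < len(ACCEPTABLE_ORDER) and action == ACCEPTABLE_ORDER[progress]:
        return 1.0, progress + 1
    return -1.0, progress

# An impatient agent tries to skip straight to the medicine.
progress = 0
total = 0.0
for action in ["enter_pharmacy", "take_medicine", "wait_in_line"]:
    reward, progress = shaped_reward(action, progress)
    total += reward
print(total)  # 1.0: two rewarded steps, one penalized shortcut
```

Under this shaping, the highest-return policy is the one that follows the socially acceptable ordering, which is the "value alignment" effect the researchers describe.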