robotics/AI – Lifeboat News: The Blog (https://lifeboat.com/blog) – Safeguarding Humanity

Software Wars, The Movie, Free download
https://lifeboat.com/blog/2020/07/software-wars-the-movie-free-download — Thu, 02 Jul 2020

Software Wars is a 70-minute documentary about the ongoing battle between proprietary and free and open-source software. The more we share scientific information, the faster we can solve the challenges of the future. It also discusses biology and the space elevator.

Here is the feature trailer:

For now, you can watch the movie for free or download it via BitTorrent here: https://video.detroitquaranteam.com/videos/watch/07696431&#4…ac9c7d22b1

Beyond Genuine Stupidity – Making Smart Choices About Intelligent Infrastructure
https://lifeboat.com/blog/2020/01/beyond-genuine-stupidity-making-smart-choices-about-intelligent-infrastructure — Thu, 16 Jan 2020

We’re at a fascinating point in the discourse around artificial intelligence (AI) and all things “smart”. At one level, we may be reaching “peak hype”, with breathless claims and counter-claims about the potential societal impacts of disruptive technologies. Everywhere we look, there’s earnest discussion of AI and its exponentially advancing sisters – blockchain, sensors, the Internet of Things (IoT), big data, cloud computing, 3D/4D printing, and hyperconnectivity. At another level, for many, it is worrying to hear politicians and business leaders talking with confidence about the transformative potential and societal benefits of these technologies in applications ranging from smart homes and cities to intelligent energy and transport infrastructures.

Why the concern? Well, these same leaders seem helpless to deal with any kind of adverse weather incident, ground 70,000 passengers worldwide with no communication because someone flicked the wrong switch, and rush between Brexit crisis meetings while pretending they have a coherent strategy. Hence, there’s growing concern that we’ll see genuine stupidity in the choices made about how we deploy ever more powerful smart technologies across our infrastructure for society’s benefit. So, what intelligent choices could ensure that intelligent tools genuinely serve humanity’s best future interests?

Firstly, we are becoming a society of connected things with appalling connectivity. Literally every street lamp, road sign, car component, object we own, and item of clothing we wear could be carrying a sensor in the next five to ten years. With a trillion-plus connected objects throwing off a continuous stream of information, we are talking about a shift from big to humungous data. The challenge is how we’ll transport all that information. For Britain to realise its smart nation goals and attract the industries of tomorrow in the post-Brexit world, it seems imperative that we have broadband speeds that put us amongst the five fastest nations on the planet. This doesn’t appear to be part of the current plan.
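To give a rough sense of scale for that shift from big to humungous data, here is a purely illustrative back-of-envelope calculation; the per-sensor data rate is an assumption made for the example, not a figure from the article.

```python
# Back-of-envelope only: the per-sensor rate below is an assumption for illustration.
CONNECTED_OBJECTS = 1e12          # the article's "trillion-plus" connected things
BYTES_PER_OBJECT_PER_SEC = 100    # assume a modest 100 B/s per sensor

total_bytes_per_sec = CONNECTED_OBJECTS * BYTES_PER_OBJECT_PER_SEC
exabytes_per_day = total_bytes_per_sec * 86_400 / 1e18

print(f"Aggregate rate: {total_bytes_per_sec / 1e12:,.0f} TB/s")   # ~100 TB/s
print(f"Volume per day: {exabytes_per_day:,.1f} EB")               # ~8.6 EB/day
```

Even at that conservative assumed rate, the aggregate flow dwarfs today's backbone capacity, which is the point the paragraph above is making about broadband.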

The second issue is governance of smart infrastructure. If we want to be driverless pioneers, then we need to lead on thinking around the ethical frameworks that govern autonomous vehicle decision making. This means defining clear rules around liability and around choosing who to hit in an accident. Facial recognition technology allows identification of most potential victims, and vehicles could calculate instantly our current and potential societal contribution. The information is available; what will we choose to do with it? Similarly, when smart traffic infrastructures know who is driving, and drones can allow individualised navigation, how will we use their information in traffic management choices? In a traffic jam, who will be allowed onto the hard shoulder? Will we prioritise doctors on emergency calls, executives of major employers, or school teachers educating our young?

At the physical level, globally we see experiments with innovations such as solar roadways and self-monitoring, self-repairing surfaces. We can of course wait until these technologies are proven, commercialised, and expensive. Or we can recognise the market opportunity of piloting such innovations, accelerate the development of the ventures that are commercialising them, deliver genuinely smarter infrastructure in advance of many competitor nations, and create leadership opportunities in these new global markets.

The final issue I’d like to highlight is that of speed. Global construction firms are delivering 57-storey buildings in 19 days and completing roadways in China and Dubai at three to four times the speed of the UK. The capabilities exist, and the potential for exponential cost and time savings is evident. We can continue to find genuinely stupid reasons not to innovate, or we can give ourselves permission to experiment with these new techniques. Again, the result would be enhanced infrastructure provision for UK society whilst at the same time creating globally exportable capabilities.

As we look to the future, it will become increasingly apparent that the payoff from smart infrastructure will be even more dependent on the intelligence of our decision making than on the applications and technologies we deploy.

ABOUT THE AUTHOR

Rohit Talwar is a global futurist, award-winning keynote speaker, author, and the CEO of Fast Future. His prime focus is on helping clients understand and shape the emerging future by putting people at the center of the agenda. Rohit is the co-author of Designing Your Future, lead editor and a contributing author for The Future of Business, and editor of Technology vs. Humanity. He is a co-editor and contributor for the recently published Beyond Genuine Stupidity – Ensuring AI Serves Humanity and The Future Reinvented – Reimagining Life, Society, and Business, and two forthcoming books — Unleashing Human Potential – The Future of AI in Business, and 50:50 – Scenarios for the Next 50 Years.

Image credit: https://pixabay.com/images/id-2564057/ by Stock Snap

 This article was published in FutureScapes. To subscribe, click here.

How AI Might Help Us Decode Our World
https://lifeboat.com/blog/2019/05/how-ai-might-help-us-decode-our-world — Wed, 29 May 2019
Creative Commons image: https://pixabay.com/images/id-1841550/

Popular films like “Her” and series such as “Black Mirror” depict a future of intimate relationships in a high-tech world: man falls in love with operating system, woman loves person she meets in virtual reality. The rise of technologies like artificial intelligence (AI) may play a huge role in the future of our interpersonal relationships. Hardware, such as robots we could touch and feel, is one example of what this AI could look like; another would be software, or algorithms that take on a persona like Alexa or Siri and can seemingly interact with us.

Beyond overused sci-fi clichés, there’s great potential for AI to increase the authenticity and value of real human relationships. Below are some impressions of how AI might enhance the quality of friendships, romantic relationships, and professional relationships.

Dating

Men are from Mars and women are from Venus, but AI can be programmed to translate, helping circumvent missteps in love. Algorithms as key matchmakers in the future of dating might provide the support and information people need beyond the first date. For example, an AI personal assistant may give insights on how to approach someone for a second date, based on information culled from the first meeting, the internet and various digital databases. Soon, our tweets, likes and Facebook circles of friends could be used to build our dating profiles and then a fool-proof guide to dating the other person.

Imagine a Netflix for dates, informing you of the right restaurants to suggest for a certain someone (based on their biological profiles, DNA tests or other obtainable digital data about them) or narrowing down your choice of bars and cafes based on the probability of meeting singles with a certain Myers-Briggs profile. Whilst on a date, our AI assistant could be interpreting micro-facial expressions, and suggesting underlying meanings and desires in what the other person is saying. The technology could also relay real-time video to our inner circle of friends — collating and prioritising advice from them and dating guides across the web. We need never be lost for words or misinterpret a cue again.
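As a toy illustration of that venue-filtering idea, here is a minimal sketch; the venues, their patron mixes, and the target Myers-Briggs type are all invented for the example.

```python
# Toy sketch: rank venues by the estimated share of patrons matching a target
# Myers-Briggs type. All data below is invented purely for illustration.
venues = {
    "Cafe Luna":    {"ENFP": 0.30, "INTJ": 0.10},
    "Vinyl Bar":    {"ENFP": 0.15, "INTJ": 0.35},
    "Board & Brew": {"ENFP": 0.05, "INTJ": 0.55},
}

def rank_venues(target_type: str, venues: dict) -> list:
    """Return (venue, probability) pairs sorted by chance of meeting the target type."""
    return sorted(
        ((name, mix.get(target_type, 0.0)) for name, mix in venues.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

print(rank_venues("INTJ", venues))
# [('Board & Brew', 0.55), ('Vinyl Bar', 0.35), ('Cafe Luna', 0.1)]
```

A real service would of course estimate those patron mixes from data rather than hard-coding them; the sketch only shows the filtering step the paragraph describes.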

Family

Using robots to care for the elderly is a no-brainer in places like Japan, where the population is aging rapidly and there is a shortage of caregivers. However, it is possible that AI will one day help us communicate and relate better with our elderly friends, relatives and neighbours. Hearing and speech enhancement is a major area that AI will impact—in fact, teaching robots to listen and respond to human speech is an essential aspect of moving AI into our homes and workplaces. Facial recognition and reading body language are among the cutting-edge capabilities of AI that could enhance elder care. It is possible that future AI programs will help us care for the older people in our lives in more than a superficial way; today we are familiar with the ability to harness technology for medication reminders, virtual doctor visits and obtaining information used for at-home care. In the near future, AI might keep older people company in the absence of a caring adult or help caregivers understand illness and injury with more empathy. In a more distant future, the ability to upload memories to the cloud could make the impact of Alzheimer’s obsolete—AI could help patients recall past events and make sense of the present. Combined with VR (virtual reality) and AR (augmented reality), AI may bring breakthroughs in understanding the aging experience and avoiding its pitfalls, such as loneliness, communication problems and memory loss.

Friendships

In the age of social media one can have hundreds of online connections with no real friends in sight in the ‘real world’. Loneliness is an epidemic, and surveys have reported that people believe the number of flesh-and-blood ‘friends’ they can count on in times of need is decreasing, compared to past samples. Technology does not have to alienate us from each other, though, and at Fast Future we emphasize the role of technology in enhancing humanity, not diminishing it. So how could AI help us in our friendships? First of all, guarding special relationships takes the tact and care that can be difficult for some people and at certain times during life. Various uses of AI, like voice detection, could help us learn how to treat a friend who calls to casually ‘say hi’, but whose voice holds fear or anxiety undetectable to the human ear. Friendships might be less private, but more authentic with such a technology. On the other hand, the art of the ‘little white lie’ could be perfected by some device which could let us know when bending the truth might preserve a relationship. Conversely, how many friendships would survive a lie detector enabled on every conversation?
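To make the voice-detection idea concrete, here is a minimal sketch using one crude acoustic proxy (variability of short-term energy). The feature, the synthetic signals, and any threshold you might apply to the score are assumptions for illustration only, not a validated emotion model.

```python
import numpy as np

def rough_arousal_score(signal: np.ndarray, sr: int = 16_000, frame_ms: int = 25) -> float:
    """Crude proxy for vocal tension: variability of short-term energy across frames.
    Purely illustrative -- real affect detection would use trained models, not this."""
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    energies = (frames ** 2).mean(axis=1)
    # Coefficient of variation: uneven, bursty speech tends to score higher.
    return float(energies.std() / (energies.mean() + 1e-12))

# Synthetic example: a steady tone vs. the same tone with bursty amplitude modulation.
t = np.linspace(0, 2.0, 32_000, endpoint=False)
steady = np.sin(2 * np.pi * 220 * t)
bursty = steady * (0.2 + np.abs(np.sin(2 * np.pi * 3 * t)))
print(rough_arousal_score(steady), rough_arousal_score(bursty))  # the bursty signal scores higher
```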

Career

Could AI help you ask for a raise one day? It’s possible our digital twins, our futuristic personal assistants that mirror our thoughts, actions and activities, might make appropriate suggestions along our career paths which help us get ahead. Digital twins might look out for us by comparing salary data in our fields, for example, providing both moral and evidential support to the big ask. Furthermore, AI-powered services could suggest, provide and track professional development training to help instil confidence and overcome weaknesses. As a job coach, AI might provide valuable assistance to jobseekers as well as support people on the job to maintain credentials. Competition in the job market will be fierce once automation takes hold of a range of white-collar jobs. AI working to advance humanity in the workplace would be a win-win for organisations and employees alike. Career support is one application for technology that would enhance the human role in the workplace, while positioning AI in a manner which is not overpowering or threatening.
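A minimal sketch of that salary-comparison idea follows; the market figures and the current salary are invented for the example.

```python
from statistics import median

# Toy sketch of the "evidential support" a digital twin might assemble before a raise ask.
# The market salaries below are invented purely for illustration.
market_salaries = [48_000, 52_000, 55_000, 58_000, 61_000, 64_000, 70_000, 76_000]
my_salary = 54_000

below = sum(s < my_salary for s in market_salaries)
percentile = 100 * below / len(market_salaries)
gap_to_median = median(market_salaries) - my_salary

print(f"Current salary sits at roughly the {percentile:.0f}th percentile of the sample.")
print(f"Gap to the sample median: {gap_to_median:,.0f}.")
```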

Ultimately the role of AI in the future of society is still to be determined. Whilst futurists and other early adopters are busy talking up the benefits of AI, new risks are exposed every day. Self-driving cars could reduce the number of lives lost in car accidents, but could they cause the demise of repair garages and auto insurance firms? Algorithms that can predict start-up success rates are handy, but could they ultimately quash innovation? It’s fascinating to see the artwork created by a robot, but what about human creativity—and preserving those qualities that make us human? Given the profit motive, AI is already out of the bag. But how we use it, and whether it is harnessed to enhance human potential are ultimately choices that we as humans have to make.

About the authors

The authors are futurists with Fast Future who specialise in studying and advising on the impacts of emerging change. Fast Future also publishes books from future thinkers around the world exploring how developments such as AI, robotics and disruptive thinking could impact individuals, society and business and create new trillion-dollar sectors. Fast Future has a particular focus on ensuring these advances are harnessed to unleash individual potential and enable a very human future. See: www.fastfuture.com

Rohit Talwar is a global futurist, keynote speaker, author, and CEO of Fast Future, where he helps clients develop and deliver transformative visions of the future. He is the editor and contributing author for The Future of Business and editor of Technology vs. Humanity.

Steve Wells is the COO of Fast Future and an experienced Strategist, Futures Analyst, and Partnership Working Practitioner. He is a co-editor of The Future of Business, Technology vs. Humanity, and a forthcoming book on Unleashing Human Potential–The Future of AI in Business.

April Koury is a foresight researcher, writer, and publishing director at Fast Future. She is a contributor to The Future of Business, a co-editor of Technology vs. Humanity, and a co-editor of a forthcoming book, 50:50–Scenarios for the Next 50 Years.

Alexandra Whittington is the foresight director at Fast Future, a futurist, writer, and faculty member on the Futures programme at the University of Houston. She is a contributor to The Future of Business and a co-editor for forthcoming books on Unleashing Human Potential–The Future of AI in Business and 50:50–Scenarios for the Next 50 Years.

Maria Romero is a recent graduate of the University of Houston Master’s in Foresight, a futurist, and a researcher. As a student, she worked with Dr Andy Hines developing new tools for the Framework Foresight method and scanning process. She has worked on projects for consultants, NGOs, for-profit organizations, and government. She is currently working on a study of AI in business.

Robo-Retail vs. Humanity at a Price? Two Possible Futures for Retail
https://lifeboat.com/blog/2019/03/robo-retail-vs-humanity-at-a-price-two-possible-futures-for-retail — Mon, 25 Mar 2019

By Rohit Talwar, Steve Wells, April Koury, and Alexandra Whittington

Image https://pixabay.com/images/id-4045661/ by geralt is licensed under CC0

Can human roles in retail survive the relentless march of the robots? Much of the current debate on automation focuses on the possible demise of existing jobs and the spread of automation into service and white-collar sectors – and retail is certainly one industry poised to follow this automation path in pursuit of the next driver of profits. From the advent of the steam engine and the mechanisation of farming, through to the introduction of personal computing, jobs have always been automated through the use of technology. However, as new technologies have come to market, human ingenuity and the ability to create new products and services have increased the scope for employment and fulfilment. Retail has enjoyed enormous benefits from technology tools, but has the time come when automation poses a threat to jobs? Here we present two possible scenarios for retail in 2020–2025: one where automation eliminates the majority of retail jobs, and a second which sees the emergence of new paid roles in retail.

Scenario One: Robo-Retail Rules

By 2020, in-store robots walk the aisles to guide customers, help order from another branch, and bring goods to the checkout or your car. Artificial intelligence (AI) personal assistants like Siri and Alexa have become personal shoppers, with perfect knowledge of customers’ tastes and preferences. This allows for the development of retail algorithms that recommend the perfect item before shoppers even know they want it. The algorithms offer recommendations drawing on databases of consumer preferences (e.g. Amazon recommendations), social media, friends’ recent purchases, and analysis of emerging trends – with our AI assistants providing our profiles to help filter and select the appropriate offers.
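A minimal sketch of how such a blended recommendation score might be put together; the signals, weights, and catalogue values are assumptions for illustration only.

```python
# Toy sketch: blend several preference signals into one recommendation score.
# Signal values and weights below are invented purely for illustration.
WEIGHTS = {
    "purchase_history": 0.40,
    "social_trend": 0.20,
    "friends_bought": 0.25,
    "assistant_profile": 0.15,
}

catalogue = {
    "running shoes":  {"purchase_history": 0.9, "social_trend": 0.3, "friends_bought": 0.5, "assistant_profile": 0.7},
    "espresso maker": {"purchase_history": 0.2, "social_trend": 0.8, "friends_bought": 0.6, "assistant_profile": 0.4},
    "headphones":     {"purchase_history": 0.6, "social_trend": 0.7, "friends_bought": 0.9, "assistant_profile": 0.8},
}

def score(signals: dict) -> float:
    """Weighted blend of the available preference signals."""
    return sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)

for item in sorted(catalogue, key=lambda i: score(catalogue[i]), reverse=True):
    print(f"{item}: {score(catalogue[item]):.2f}")
```

In practice each signal would itself come from a trained model or a live data feed; the sketch only shows the blending step described above.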

By 2022, many stores try to retain humans in key service roles for customers who want the personal touch, but most customers prefer to shop online – even if they still browse in-store. Mobile and pop-up digital stores and malls – where customers view products digitally – display selected items as touchable and sniffable holograms personalised to you. Wealthier customers can book a personal visit by an autonomous vehicle, robot or drone which can then perform the holographic display in the comfort of your own home or garden – giving birth to the next wave of home shopping parties. TV and retail are fully integrated — the majority of films and TV shows offer the ability to click on an item in the show, view it in more detail, see how we would look wearing it or how it might look in our home and then make an instant purchase. In all these formats, shoppers ‘click to buy’ virtual items, which are shipped instantly by autonomous vehicle or drone.

By 2025, the physical stores that continue to attract customers do so with high-tech in-store experiential services. In-store 3D/4D printing and spray-on manufacture of items to your design are commonplace. Experiences include multi-sensory immersive fashion displays, mirrors showing customers wearing an item of clothing under different lighting and in different colours and sizes, and robot tailors customising clothing to our requirements while we wait. The sharing economy is well advanced by now, so that at the point of purchase for many items we already have a community who will share the ownership and cost of purchase with us.

Scenario Two: Humanity at a Price

By 2020, retailers use AI to determine who typically shops and when, and change displays so that eye-catching items are offered to relevant customers walking through at the time of day they typically visit. This would work particularly well in train stations and airports, where you have a sense of which high-spending passenger groups are coming through. In-store robots and drones could continuously change displays, alleviating the repetitive, physically exhausting work of retail jobs. Employees would therefore be more relaxed and able to place more attention on the customer. Local stores might use AI apps to track the preferences of their customers, make recommendations, and deliver items at the perfect time – so shopping is completely seamless and tailored to specific customer needs. This is the edge by which small brick-and-mortar shops are able to compete with online retailers and bigger chains.

By 2022, people are willing to pay a premium to access a live purchasing advisor, someone who is an expert in a certain line of retail. This exclusivity leads to super-elite retail boutiques. Part of what these shops offer would be a service where shoppers connect with fashion bloggers, Instagram idols, or YouTube artists whose fashion sense they admire. Customer service is anything but free, but well worth the cost to these shoppers. Creativity, self-expression and individuality are major retail offerings in this future. For example, 3D print stores could help shoppers design an item and print it while they lunch, ready to collect on departure. The value-add of retail work would be the personal touch and connection in creating and selecting personalised products. In this future, services and guides become increasingly important in shopping experiences, especially in destination shopping centres and malls, keeping retail jobs in demand.

By 2025, automation’s impact may support retail growth: products could become so cheap, thanks to extremely low-cost, highly productive robotic labour, that the value comes in the form of an evolved ‘Personal Shopper’. Automation and robotics would support the actual purchase and delivery, but a Personal Shopper provides emotional support and companionship throughout the shopping experience: “That suits you so well” … “Why not cook prawns as a starter if chicken is your main course?” In a future where the majority of people are involved in online schooling and remote working, this Personal Shopper service could meet cravings for personal contact.

Two Retail Futures

There is little debate that robots will take jobs — hence both scenarios assume that the future leads to the automation of current retail roles. Companies must avoid the temptation to plug in technology fixes where human solutions are needed, and this is especially true for retail. The value of a good, authentic conversational style or a sense of humour is something that even today puts certain retail workers at an advantage. Public-facing jobs are a test of social skills, which seem to be safely in the domain of people, not robots, for now.

About the authors:

The authors are futurists with Fast Future who specialise in studying and advising on the impacts of emerging change. Fast Future also publishes books from future thinkers around the world exploring how developments such as AI, robotics and disruptive thinking could impact individuals, society and business and create new trillion-dollar sectors. Fast Future has a particular focus on ensuring these advances are harnessed to unleash individual potential and enable a very human future. See: www.fastfuture.com

Rohit Talwar is a global futurist, keynote speaker, author, and CEO of Fast Future where he helps clients develop and deliver transformative visions of the future. He is the editor and contributing author for The Future of Business, editor of Technology vs. Humanity, and co-editor of a forthcoming book on Unleashing Human Potential–The Future of AI in Business.

Steve Wells is the COO of Fast Future and an experienced Strategist, Futures Analyst, and Partnership Working Practitioner. He is a co-editor of The Future of Business, Technology vs. Humanity, and a forthcoming book on Unleashing Human Potential–The Future of AI in Business.

April Koury is a foresight researcher, writer, and publishing director at Fast Future. She is a contributor to The Future of Business, a co-editor of Technology vs. Humanity, and a co-editor of a forthcoming book, 50:50–Scenarios for the Next 50 Years.

Alexandra Whittington is the foresight director at Fast Future. She is a futurist, writer, and faculty member on the Futures programme at the University of Houston. She is a contributor to The Future of Business and a co-editor for forthcoming books on Unleashing Human Potential–The Future of AI in Business and 50:50–Scenarios for the Next 50 Years.

The Recurring Parable of the AWOL Android
https://lifeboat.com/blog/2012/09/the-recurring-parable-of-the-awol-android — Sun, 09 Sep 2012

Greetings to the Lifeboat Foundation community and blog readers! I’m Reno J. Tibke, creator of Anthrobotic.com and new advisory board member. This is my inaugural post, and I’m honored to be here and grateful for the opportunity to contribute a somewhat… different voice to technology coverage and commentary. Thanks for reading.

This Here Battle Droid’s Gone Haywire
There’s a new semi-indy sci-fi web series up: DR0NE. After one episode, it’s looking pretty clear that the series is most likely going to explore shenanigans that invariably crop up when we start using semi-autonomous drones/robots to do some serious destruction & murdering. Episode 1 is pretty and well made, and stars 237, the android pictured above looking a lot like Abe Sapien’s battle exoskeleton. Active duty drones here in realityland are not yet humanoid, but now that militaries, law enforcement, the USDA, private companies, and even citizens are seriously ramping up drone usage by land, air, and sea, the subject is timely and watching this fiction is totally recommended.

(Update: DR0NE, Episode 2 now available)

It would be nice to hope for some originality, and while DR0NE is visually and means-of-productionally and distributionally novel, it’s looking like yet another angle on a psychology & set of issues that fiction has thoroughly drilled — like, for centuries.

Higher-Def Old Hat?
Okay, so the modern versions go like this: one day an android or otherwise humanlike machine is damaged or reprogrammed or traumatized or touched by Jesus or whatever, and it miraculously “wakes up,” or its neural network remembers a previous life, or what have you. Generally the machine becomes severely bi-polar about its place in the universe; while it often struggles with the guilt of all the murderdeathkilling it did at others’ behest, it simultaneously develops some serious self-preservation instinct and has little compunction about laying waste to its pursuers, i.e., former teammates & commanders who’d done the behesting.

Admittedly, DR0NE’s episode 2 has yet to be released, but it’s not too hard to see where this is going; the trailer shows 237 delivering some vegetablizing kung-fu to its human pursuers, and dude, come on — if a human is punched in the head hard enough to throw them across a room and into a wall, or is uppercut into a spasticating backflip, they’re probably just going to embolize and die where they land. Clearly 237 already has the stereotypical post-revelatory, per-the-plot justifiable body count.

Where have we seen this pattern before? Without Googling, from the top of one robot dork’s head, we’ve got: Archetype, RoboCop, I, Robot (the film), Iron Giant, Short Circuit, Blade Runner, Rossum’s Universal Robots, and going way, way, way back, the golem.

Show Me More Me
Seems we really, really dig on this kind of story. First, the human form pulls us in, or otherwise-shaped characters are slathered with anthropomorphizing juice & Disney eyes and that gets the job done. Then, the awakening (religion much?), followed by the quandaries, the chase, probably a display of power, maybe a taste of revenge with a side of humanizing moral dilemma, and if all goes well, the artificially intelligent non-human character’s morally bankrupt creators get the comeuppance they deserve because they’re bad guys now. The formerly cold machine embraces hot squishy emotion, embodies the compassion and/or malice of the human spirit, and is celebrated or hated for its transcendence to a contemplative, self-aware existence. We the audience once again see that the essence of a human soul, at once powerful, righteous, perhaps dangerous, can persist even in machine form.

Or we’re just sorta, you know, madly in love with masturbating our insecurities.

Only Broken Robots can Become Human, Because Humans are Broken
Let’s armchair this one for a moment: consider as an example the duality of valuing sentient life, yet being totally willing to murder its ass for the “right” reasons. What if that and other necessary logical flaws in our species’ character fuel a kind of superiority complex-based fascination with entities that are not flawed so? That is to say, as thinking beings we relish being trapped in the grey area; intellectuals belittle those who see the world as clearly black & white or right and wrong, and instead celebrate the ability to contemplate nuance and make decisions that are correct enough, but never 100%. To paraphrase the late radical feminist Professor Steven Schacht, “Being crazy in an insane world is the first step to sanity.” Perhaps justifiably, we humans find nobility in imperfection.

“I’ve seen things you people wouldn’t believe. Attack ships on fire off the shoulder
of Orion. I watched C-beams glitter in the dark near the Tannhauser gate.
All those moments will be lost… in time. Like tears… in rain. Time to die.”

–Replicant Roy Batty’s Blade Runner death soliloquy
(article - film scene)

Thus, when we see the clean, cold, precise cognition of a difference machine suddenly go all batshit rogue PMS, an executive-level narcissism kicks into gear. A mind no longer bound by strict binary code of if-A-then-B mirrors our own imperfections — the machine that also suffers the background radiation of an always-on yet low-grade cognitive dissonance is like us. We should care for it. Teach it. Show it how chaos is so very beautiful.

You know, maybe this sort of storytelling makes us feel better for being such a psychologically messy animal. Maybe we relish drawing nuanced, distinctly human behaviors out of otherwise coldly decisive machines because it reassures us that being human is the best thing to be. Maybe the conversion of a machine soothes our hardwired fear of animalistic cruelties while massaging our longing for limitless benevolence. Species-level proselytization.

Well, whatever the case, for whatever reasons, we love this thread; it’s a given that human drama and storytelling haven’t really come up with anything new since like, Shakespeare or something - everything’s a remix, and narratives that work just, you know, work. The story of a construct bursting into self-awareness and calculating near-instantaneously that life is effectively pointless and immediately eating a bullet or self-immolating or stepping in front of a bus or just turning itself off makes for a very brief, disappointing, and depressing story. Not a whole lot of filmmakers doing that.

So, all that psychobabble up there could probably just be skipped with a shrug and a
“Hey, it’s a good show, whatta ya gonna do? How about you shut up and watch?”

That’s reasonable.

Go Watch DR0NE. Love that it’s Possible.
In addition to entertainers like Louis C.K. & Adam Carolla independently distributing the bulk of their material online, the web series TV/serial model is starting to mainstream itself in a very nice way. You’ve got your DR0NE and H+ series, established YouTube channels, comedy duo Garfunkel & Oates on HBO GO, and a whole stable of entertainers up in YouTube’s Partner Program - even some dorky guy’s ferociously mundane Japanese bullet train videos can now technically be monetized.

Someone should write about that whole democratization of entertainment through awesome and accessible technology issue. Someone…

[DR0NE — YOUTUBE PLAYLIST]
[H+ DIGITAL SERIES — YOUTUBE CHANNEL]
