In recent years, electronics engineers have been trying to develop new brain-inspired hardware that can run artificial intelligence (AI) models more efficiently. While most existing hardware is specialized in either sensing, processing or storing data, some teams have been exploring the possibility of combining these three functionalities in a single device.

Researchers at Xi’an Jiaotong University, the University of Hong Kong and Xi’an University of Science and Technology introduced a new organic transistor that can act as a sensor and processor. This transistor, introduced in a paper published in Nature Electronics, is based on a vertical traverse architecture and a crystalline-amorphous channel that can be selectively doped by ions, allowing it to switch between two reconfigurable modes.

“Conventional (AI) hardware uses separate systems for data sensing, processing and storage,” Prof. Wei Ma and Prof. Zhongrui Wang, two of the researchers who carried out the study, told Tech Xplore.

When you’re putting together a computer workstation, what would you say is the cleanest setup? Wireless mouse and keyboard? Super-discreet cable management? How about no visible keeb, no visible mouse, and no obvious display?

That’s what [Basically Homeless] was going for. Utilizing a Flexispot E7 electronically raisable standing desk, an ASUS laptop, and some other off-the-shelf parts, this project is taking the idea of decluttering to the extreme, with no visible peripherals and no visible wires.

There was clearly a lot of learning and much painful experimentation involved, and the guy kind of glossed over how the keyboard was embedded in the desk surface. A thin layer of resin was formed in-plane with the desk surface, the keyboard was mounted just below it, and lots of careful fettling of the openings meant the keys could still be depressed. Since they don’t stand proud of the surface, the keys are practically invisible once painted. After all, you need that tactile feedback, and a projection keeb just isn’t right.

The mammalian retina is a complex system consisting of cones (for color) and rods (for peripheral monochrome vision) that provide the raw image data, which is then processed by successive layers of neurons before this preprocessed data is sent via the optic nerve to the brain’s visual cortex. In order to emulate this system as closely as possible, researchers at Penn State University have created a system that uses perovskite (methylammonium lead halide, MAPbX3) RGB photodetectors and a neuromorphic processing algorithm that performs processing similar to that of the biological retina.

Panchromatic imaging is defined as being ‘sensitive to light of all colors in the visible spectrum’, which in imaging means enhancing the monochromatic (e.g. RGB) channels using panchromatic (intensity, not frequency) data. For the retina this means that the incoming light is not merely used to determine the separate colors, but also the intensity, which is what underlies the wide dynamic range of the Mark I eyeball. In this experiment, layers of these MAPbX3 (X being Cl, Br, I or combination thereof) perovskites formed stacked RGB sensors.
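For intuition, here is what that kind of enhancement can look like in code: a minimal Brovey-style pan-sharpening sketch in Python, where a separate panchromatic (intensity) channel rescales the RGB channels. This is a generic textbook technique chosen purely for illustration, not the processing used in the Penn State system.

```python
# Illustrative Brovey-style pan-sharpening: enhance RGB channels with a
# separate panchromatic (intensity) channel. Generic technique for
# illustration only; NOT the researchers' pipeline.
import numpy as np

def pan_sharpen(rgb: np.ndarray, pan: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """rgb: (H, W, 3) floats in [0, 1]; pan: (H, W) intensity in [0, 1]."""
    luminance = rgb.mean(axis=-1, keepdims=True)   # crude intensity estimate
    ratio = pan[..., None] / (luminance + eps)     # per-pixel gain from pan channel
    return np.clip(rgb * ratio, 0.0, 1.0)          # rescale each color channel

# Toy usage: a dim RGB patch brightened by a higher-dynamic-range pan channel.
rgb = np.random.rand(4, 4, 3) * 0.3
pan = np.random.rand(4, 4)
sharpened = pan_sharpen(rgb, pan)
```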

The output of these sensor layers was then processed in a pretrained convolutional neural network, to generate the final, panchromatic image which could then be used for a wide range of purposes. Some applications noted by the researchers include new types of digital cameras, as well as artificial retinas, limited mostly by how well the perovskite layers scale in resolution, and their longevity, which is a long-standing issue with perovskites. Another possibility raised is that of powering at least part of the system using the energy collected by the perovskite layers, akin to proposed perovskite-based solar panels.
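As a rough illustration of that processing stage, the sketch below feeds a fake stacked R/G/B sensor readout through a small convolutional network to produce a single panchromatic output. The architecture, tensor shapes, and (random) weights are assumptions for illustration; the paper’s actual pretrained network is not reproduced here.

```python
# Minimal sketch: stacked per-color sensor readouts fed through a small CNN
# to produce a panchromatic image. Architecture and shapes are illustrative
# assumptions, not the network from the paper.
import torch
import torch.nn as nn

class RetinaCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # fuse R/G/B sensor layers
            nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),             # single panchromatic output
        )

    def forward(self, stacked_rgb: torch.Tensor) -> torch.Tensor:
        return self.net(stacked_rgb)

model = RetinaCNN().eval()             # a real system would load trained weights
readout = torch.rand(1, 3, 64, 64)     # fake stacked R/G/B sensor readout
with torch.no_grad():
    panchromatic = model(readout)      # (1, 1, 64, 64) enhanced image
```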

Do intelligent people think faster? Researchers at the BIH and Charité—Universitätsmedizin Berlin, together with a colleague from Barcelona, made the surprising finding that participants with higher intelligence scores were only quicker when tackling simple tasks, while they took longer to solve difficult problems than subjects with lower IQ scores.

In personalized brain simulations of the 650 participants, the researchers could determine that brains with reduced synchrony between brain regions literally “jump to conclusions” when making decisions, rather than waiting until upstream brain regions could complete the processing steps needed to solve the problem.

In fact, the brain models of higher-scoring participants also needed more time to solve challenging tasks but made fewer errors. The scientists have now published their findings in the journal Nature Communications.
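The published work relies on personalized whole-brain simulations, but the underlying speed-accuracy intuition can be illustrated with a much simpler toy: a generic drift-diffusion (evidence accumulation) model in which a lower decision threshold “jumps to conclusions” faster but errs more often. Everything below is an illustrative assumption, not the authors’ model.

```python
# Toy evidence-accumulation model of the speed/accuracy trade-off described
# in the study. NOT the authors' whole-brain simulation: a generic
# drift-diffusion sketch where a lower decision bound answers faster but
# makes more errors.
import random

def decide(threshold: float, drift: float = 0.05, noise: float = 1.0) -> tuple[int, bool]:
    """Accumulate noisy evidence until |evidence| crosses the threshold.
    Positive drift marks the correct answer; returns (steps, correct?)."""
    evidence, steps = 0.0, 0
    while abs(evidence) < threshold:
        evidence += drift + random.gauss(0.0, noise)
        steps += 1
    return steps, evidence > 0

random.seed(0)
for threshold in (2.0, 10.0):  # impulsive vs. deliberate decision bound
    trials = [decide(threshold) for _ in range(2000)]
    mean_steps = sum(t for t, _ in trials) / len(trials)
    accuracy = sum(ok for _, ok in trials) / len(trials)
    print(f"threshold={threshold}: {mean_steps:6.1f} steps, {accuracy:.0%} correct")
```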

Scientists have developed an advanced technique for 3D printing that is set to revolutionize the manufacturing industry.

The group, led by Dr. Jose Marques-Hueso from the Institute of Sensors, Signals & Systems at Heriot-Watt University in Edinburgh, has created a new method of 3D printing that uses near-infrared (NIR) light to create complex structures containing multiple materials and colors.

They achieved this by modifying a well-established 3D printing technique known as stereolithography to push the boundaries of multi-material integration. A conventional 3D printer would normally apply a blue or UV laser to a photosensitive resin that is then selectively solidified, layer by layer, to build a desired object. But a major drawback of this approach has been the limitations in intermixing materials.
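To make the layer-by-layer idea concrete, here is a toy voxel sketch of conventional stereolithography, where a per-layer mask stands in for the cross-section the laser cures. It is purely illustrative and does not model the NIR multi-material process described in the paper.

```python
# Toy voxel model of conventional stereolithography: a laser selectively
# cures photosensitive resin layer by layer according to a per-layer mask.
# Purely illustrative; the NIR multi-material process works differently.
import numpy as np

H, W, LAYERS = 8, 8, 4
part = np.zeros((LAYERS, H, W), dtype=bool)   # cured voxels of the finished part

def layer_mask(z: int) -> np.ndarray:
    """Cross-section of the object at layer z (here: a shrinking square)."""
    m = np.zeros((H, W), dtype=bool)
    m[z : H - z, z : W - z] = True
    return m

for z in range(LAYERS):          # the build platform steps through each layer...
    part[z] = layer_mask(z)      # ...and the laser cures only this slice's mask

print(f"cured {part.sum()} voxels across {LAYERS} layers")
```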

AI is everywhere. Its use is being debated in headlines, on social media and around dinner tables. To some, the rate of AI acceleration is concerning, with many technology leaders calling for a six-month pause in the training of new systems to better understand the impact such tools are having. To others, AI is seen as the cornerstone of the fourth industrial revolution, the latest disruptive technology opening up possibilities for new ways of learning, working and living that we have never experienced before.

Yet, disruptive technologies are nothing new. They have been changing the way we live and work for decades. And these changes have not been without consequences, particularly in the form of economic dislocation and social upheaval. Automation in manufacturing has streamlined mass production and driven down costs; e-commerce platforms have reshaped the way we shop and do business; even online education has found new ways to provide flexible and affordable ways of learning, delivering opportunities to millions across the globe that simply were not available before.

Presently, much of the discussion around the impact of AI is based on conjecture. However, it is widely agreed that it will have a major impact on jobs and even has the potential to call into question the very fundamentals of what work is. What is not understood is how AI will play out across society in the longer term. Will it, like previous technological revolutions, deliver short-term disruptions followed by long-term benefits, or will it be the catalyst for new ways of learning and upskilling and help reduce the widening digital divide?

You might even go so far as to think of the term brainwashing in relation to propaganda spread during the First and Second World Wars, in order to influence vast numbers of people.

But what exactly is brainwashing and should we confine it to the past?

The term brainwashing was first coined in the 1950s during the Korean War. It was used to explain how totalitarian regimes were able to completely indoctrinate American soldiers through a process of torture and propaganda.

Guy, a recognized industry thought leader, is the president of SmartSense, a provider of IoT solutions for the enterprise.

It’s no secret that healthcare systems exist at the intersection of financial risk and operational risk. Amid the market volatility of our current socioeconomic environment, the pressure is on hospitals, clinics and blood banks to maintain healthy profit margins that enable them to keep pace with rising demand for clinical care and prescription medications. The rate of U.S. spending on prescriptions is increasing at a rapid clip, and considering physician-administered drugs provide hospitals with high gross profits, investing in pharmaceutical services is a logical pathway to profitability.

However, stringent pharmaceutical compliance regulations related to safety and efficacy (CDC, VFC, FDA, AABB and BOP) create a myriad of risk management issues for healthcare organizations to juggle. In the U.S., adverse drug effects are among the most common medical errors. All it takes is one mismanaged medication to put a patient’s health at significant risk. And on a global scale, widespread vaccine hesitancy rooted in public skepticism has served as a critical roadblock to mitigating the spread of severe infectious diseases like Covid-19.