Archive for the ‘information science’ category: Page 203

Oct 12, 2020

AI helps produce world’s largest 3D map of the universe

Posted in categories: information science, robotics/AI, space

Scientists at the University of Hawaii’s Mānoa Institute for Astronomy (IfA) have used AI to produce the world’s largest 3D catalog of stars, galaxies, and quasars.

The team developed the map using an optical survey of three-quarters of the sky produced by the Pan-STARRS observatory on Haleakalā, Maui.

They trained an algorithm to identify celestial objects in the survey by feeding it spectroscopic measurements that provide definitive object classifications and distances.
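
In supervised-learning terms, the setup is: photometric measurements from the survey are the input features, and the spectroscopic classifications serve as ground-truth labels. Below is a minimal sketch of that training loop with synthetic data and made-up feature names; the post does not describe the team's actual features or network architecture.

```python
# Minimal sketch of the supervised setup described above: photometric
# colors as input features, spectroscopic classifications as labels.
# Synthetic data and made-up features, not the actual IfA pipeline.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

n_objects = 5000
X = rng.normal(size=(n_objects, 4))          # e.g. g-r, r-i, i-z, z-y colors
score = X @ np.array([1.0, -0.5, 0.3, 0.0])  # toy rule generating labels
y = np.digitize(score, [-0.6, 0.6])          # 0=star, 1=galaxy, 2=quasar

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Small feed-forward network standing in for the survey's classifier.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```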

Oct 12, 2020

The Coming Internet: Secure, Decentralized and Immersive

Posted in categories: computing, disruptive technology, electronics, information science, internet, open access, supercomputing

The blockchain revolution, online gaming and virtual reality are powerful new technologies that promise to change our online experience. After summarizing advances in these hot technologies, we use the collective intelligence of our TechCast Experts to forecast the coming Internet that is likely to emerge from their application.

Here’s what we learned:

Security May Arrive About 2027: We found a sharp division of opinion. Roughly half of our experts think there is little or no chance the Internet will become secure, while the other half put the probability at about 60% that blockchain and quantum cryptography will solve the problem by about 2027. After noting the success of Gilder’s previous forecasts, we tend to side with those who agree with Gilder.

Decentralization Likely About 2028–2030: We find some consensus around a 60% probability, with the most likely period being 2028–2030. The critical technologies are thought to center on blockchain, but quantum computing, AI, biometrics, and the Internet of Things (IoT) are also thought to offer localizing capabilities.

Continue reading “The Coming Internet: Secure, Decentralized and Immersive” »

Oct 11, 2020

DIA awards nearly $800 million in work to major defense primes

Posted in categories: computing, information science

The U.S. Defense Intelligence Agency awarded nearly $800 million in contracts to two major defense contractors to improve data storage and network modernization.

The DIA, a military intelligence agency, chose Northrop Grumman to deliver its Transforming All-Source Analysis with Location-Based Object Services (TALOS) program, which focuses on building new big data systems. The contract is worth $690 million. A spokesperson for Northrop Grumman declined to provide the performance period.


The DIA made two awards to Northrop Grumman and GDIT.

Continue reading “DIA awards nearly $800 million in work to major defense primes” »

Oct 10, 2020

New quantum computing algorithm skips past time limits imposed by decoherence

Posted in categories: information science, quantum physics, supercomputing

This could be important!


A new algorithm that fast-forwards simulations could bring greater usability to current and near-term quantum computers, opening the way for applications to run past the strict time limits that hamper many quantum calculations.

“Quantum computers have a limited time to perform calculations before their useful quantum nature, which we call coherence, breaks down,” said Andrew Sornborger of the Computer, Computational, and Statistical Sciences division at Los Alamos National Laboratory, and senior author on a paper announcing the research. “With the algorithm we have developed and tested, we will be able to fast-forward quantum simulations to solve problems that were previously out of reach.”
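
The trick that makes fast-forwarding possible is worth spelling out. If the short-time step U = exp(-iHΔt) is diagonalized once as U = W D W†, then n steps cost no more circuit depth than one, because Uⁿ = W Dⁿ W†. Below is a classical NumPy illustration of that identity on a toy Hamiltonian made up for the purpose; it sketches the linear algebra, not the team's actual quantum circuits.

```python
# Classical illustration of the fast-forwarding identity: diagonalize the
# short-time propagator once, then reach any later time by powering the
# eigenvalues. Toy Hamiltonian only, not the paper's quantum circuits.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2            # made-up Hermitian "Hamiltonian"

dt = 0.1
U = expm(-1j * H * dt)              # one short time step

# Diagonalize once: U = W @ diag(d) @ W^-1.
d, W = np.linalg.eig(U)

# Fast-forward: n steps at the cost of one, by powering the eigenvalues.
n = 1000
U_fast = W @ np.diag(d**n) @ np.linalg.inv(W)
U_slow = np.linalg.matrix_power(U, n)

print("max deviation:", np.abs(U_fast - U_slow).max())   # ~1e-12
```

In the variational version of this idea, the diagonalizing circuit W is found by optimization and compiled to a fixed depth, which is what lets a simulation run far past the coherence window.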

Continue reading “New quantum computing algorithm skips past time limits imposed by decoherence” »

Oct 9, 2020

CLEANN: A framework to shield embedded neural networks from online Trojan attacks

Posted in categories: cybercrime/malcode, information science, robotics/AI

With artificial intelligence (AI) tools and machine learning algorithms now making their way into a wide variety of settings, assessing their security and ensuring that they are protected against cyberattacks is of utmost importance. As most AI algorithms and models are trained on large online datasets and third-party databases, they are vulnerable to a variety of attacks, including neural Trojan attacks.

A neural Trojan attack occurs when an attacker inserts what is known as a hidden Trojan trigger or backdoor inside an AI model during its training. This trigger allows the attacker to hijack the model’s prediction at a later stage, causing it to classify data incorrectly. Detecting these attacks and mitigating their impact can be very challenging, as a targeted model typically performs well and in alignment with a developer’s expectations until the Trojan backdoor is activated.
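
To make the mechanics concrete, here is a minimal sketch of how such a backdoor is planted at training time: a small trigger patch is stamped onto a few percent of the training images and their labels are switched to the attacker's target class. Everything here is synthetic and generic; it illustrates the threat CLEANN defends against, not the paper's exact setup.

```python
# Toy sketch of a neural Trojan (backdoor) attack: poison a small
# fraction of training images with a trigger patch and the attacker's
# label. Synthetic data; illustrates the mechanism only.
import numpy as np

rng = np.random.default_rng(2)

n, side = 2000, 8
X = rng.random((n, side, side))                # fake "images"
y = (X.mean(axis=(1, 2)) > 0.5).astype(int)    # clean 2-class labels

TARGET = 1                                     # attacker's chosen class

def stamp_trigger(imgs):
    imgs = imgs.copy()
    imgs[:, -2:, -2:] = 1.0                    # bright 2x2 corner patch
    return imgs

# Poison 5% of the training set: add trigger, flip label to TARGET.
poison = rng.random(n) < 0.05
X[poison] = stamp_trigger(X[poison])
y[poison] = TARGET

# A model trained on (X, y) now associates the patch with TARGET: it
# performs normally on clean inputs, but the trigger hijacks predictions.
print(f"poisoned {poison.sum()} of {n} training examples")
```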

Researchers at the University of California, San Diego have recently created CLEANN, an end-to-end framework designed to protect embedded neural networks from Trojan attacks. This framework, presented in a paper pre-published on arXiv and set to be presented at the 2020 IEEE/ACM International Conference on Computer-Aided Design, was found to perform better than previously developed Trojan shields and detection methods.

Oct 9, 2020

Bringing the promise of quantum computing to nuclear physics

Posted in categories: computing, information science, particle physics, quantum physics

Quantum mechanics, the physics of atoms and subatomic particles, can be strange, especially compared to the everyday physics of Isaac Newton’s falling apples. But this unusual science is enabling researchers to develop new ideas and tools, including quantum computers, that can help demystify the quantum realm and solve complex everyday problems.

That’s the goal behind a new U.S. Department of Energy Office of Science (DOE-SC) grant, awarded to Michigan State University (MSU) researchers, led by physicists at the Facility for Rare Isotope Beams (FRIB). Working with Los Alamos National Laboratory, the team is developing algorithms – essentially programming instructions – for quantum computers to help these machines address problems that are difficult for conventional computers, such as explaining the fundamental quantum science that keeps an atomic nucleus from falling apart.
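
One common pattern for such algorithms is variational: a parameterized circuit prepares a trial state |ψ(θ)⟩, and a classical optimizer tunes θ to minimize the energy ⟨ψ(θ)|H|ψ(θ)⟩, homing in on the ground state. The post does not say which algorithms the team is developing, so the following is only a classical NumPy emulation of that loop on a made-up two-level Hamiltonian.

```python
# Classical emulation of a variational eigensolver loop: minimize the
# energy of a trial state over a toy 2-level Hamiltonian. Illustrative
# only; the actual MSU/Los Alamos algorithms are not specified here.
import numpy as np
from scipy.optimize import minimize_scalar

H = np.array([[1.0, 0.5],
              [0.5, -1.0]])         # made-up Hamiltonian

def energy(theta):
    # Trial state |psi(theta)> = cos(theta)|0> + sin(theta)|1>
    psi = np.array([np.cos(theta), np.sin(theta)])
    return psi @ H @ psi

res = minimize_scalar(energy, bounds=(0, np.pi), method="bounded")
exact = np.linalg.eigvalsh(H)[0]

print(f"variational ground energy: {res.fun:.6f}")
print(f"exact ground energy:       {exact:.6f}")
```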

The $750,000 award, provided by the Office of Nuclear Physics within DOE-SC, is the latest in a growing list of grants supporting MSU researchers developing new quantum theories and technology.

Oct 9, 2020

What Brain-Computer Interfaces Could Mean for the Future of Work

Posted in categories: biotech/medical, computing, information science, neuroscience, wearables

Imagine if your manager could know whether you actually paid attention in your last Zoom meeting. Or, imagine if you could prepare your next presentation using only your thoughts. These scenarios might soon become a reality thanks to the development of brain-computer interfaces (BCIs).

To put it in the simplest terms, think of a BCI as a bridge between your brain and an external device. As of today, we mostly rely on electroencephalography (EEG) — a collection of methods for monitoring the electrical activity of the brain — to do this. But that’s changing. Brain activity can be recorded by a non-invasive device — no surgical intervention needed — and, by leveraging multiple sensors and complex algorithms, it’s now becoming possible to analyze those signals and extract relevant brain patterns. In fact, the majority of existing and mainstream BCIs are non-invasive, such as wearable headbands and earbuds.
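
As a concrete example of what "extracting relevant brain patterns" can mean in practice, a common first step with EEG is to estimate power in the standard frequency bands (theta, alpha, beta) and form simple ratios from them. The sketch below does this on a synthetic signal using Welch's method; the engagement index shown is a widely used heuristic, not the algorithm of any particular headset.

```python
# Sketch of basic EEG feature extraction: band power via Welch's method
# and a simple engagement index. Synthetic signal; the ratio used here
# is a common heuristic, not a specific BCI product's algorithm.
import numpy as np
from scipy.signal import welch

fs = 256                                   # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)               # 10 s of fake single-channel EEG
rng = np.random.default_rng(3)
eeg = (np.sin(2 * np.pi * 10 * t)          # alpha (~10 Hz) component
       + 0.5 * np.sin(2 * np.pi * 20 * t)  # beta (~20 Hz) component
       + rng.normal(scale=0.5, size=t.size))

f, psd = welch(eeg, fs=fs, nperseg=fs * 2)

def band_power(lo, hi):
    mask = (f >= lo) & (f < hi)
    return np.trapz(psd[mask], f[mask])

theta, alpha, beta = band_power(4, 8), band_power(8, 13), band_power(13, 30)
engagement = beta / (alpha + theta)        # one common engagement heuristic

print(f"theta={theta:.3f}  alpha={alpha:.3f}  beta={beta:.3f}")
print(f"engagement index: {engagement:.3f}")
```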

The development of BCI technology was initially focused on helping paralyzed people control assistive devices using their thoughts. But new use cases are being identified all the time. For example, BCIs can now be used as a neurofeedback training tool to improve cognitive performance. I expect to see a growing number of professionals leveraging BCI tools to improve their performance at work. For example, your BCI could detect that your attention level is too low compared with the importance of a given meeting or task and trigger an alert. It could also adapt the lighting of your office based on how stressed you are, or prevent you from using your company car if drowsiness is detected.

Oct 5, 2020

SkyWatch and Picterra combine imagery access with AI tools

Posted in categories: business, information science, robotics/AI, satellites

SkyWatch Space Applications, the Canadian startup whose EarthCache platform helps software developers embed geospatial data and imagery in applications, announced a partnership Oct. 5 with Picterra, a Swiss startup with a self-service platform to help customers autonomously extract information from aerial and satellite imagery.

“One of the things that has been very difficult to achieve is this ability to easily and affordably access satellite data in a way that is fast but also in a way in which you can derive the insights you need for your particular business,” James Slifierz, SkyWatch CEO, told SpaceNews. “What if you can merge both the accessibility of this data with an ease of developing and applying intelligence to the data so that any company in the world could have the tools to derive insights?”

SkyWatch’s EarthCache platform is designed to ease access to aerial and satellite imagery. However, SkyWatch doesn’t provide data analysis.

Continue reading “SkyWatch and Picterra combine imagery access with AI tools” »

Oct 1, 2020

New Website Lets You Help NASA Find Alien Worlds

Posted in categories: information science, robotics/AI, space

NASA just launched a new citizen science project — it wants the public’s help to find and identify brand new exoplanets.


Human Touch

This is the sort of work that technically could be automated with an algorithm trained to spot new worlds, Space.com reports. But it turns out that in this case, there’s no substitute for human judgment.
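
For a sense of what that automation looks like, the core of a transit search is finding small periodic dips in a star's brightness. The toy sketch below injects box-shaped transits into a simulated light curve and flags them with a running mean; production pipelines use more robust tools such as box least squares.

```python
# Toy transit search: simulate a light curve with periodic dips and flag
# them against the noise level. Real pipelines use more robust methods
# (e.g. box least squares); this is only a sketch of the idea.
import numpy as np

rng = np.random.default_rng(4)
n = 5000
flux = 1.0 + rng.normal(scale=0.001, size=n)   # flat, noisy light curve

period, width, depth = 700, 20, 0.01           # in samples; illustrative
for start in range(300, n - width, period):
    flux[start:start + width] -= depth         # inject box-shaped transits

# Detect: running mean over one transit width, then flag deep outliers.
smooth = np.convolve(flux, np.ones(width) / width, mode="same")
dips = smooth < 1.0 - 3 * flux.std()

print("flagged in-transit samples:", dips.sum())
```

Vetting such candidates is exactly where the human judgment the article describes comes in: instrument artifacts and eclipsing binaries produce dips that simple statistics like these cannot distinguish from planets.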

Continue reading “New Website Lets You Help NASA Find Alien Worlds” »

Sep 30, 2020

This AI Generates Photos Using Only Text Captions as a Guide

Posted in categories: information science, robotics/AI

Researchers at the Allen Institute for Artificial Intelligence (AI2) have created a machine learning algorithm that can produce images using only text captions as its guide. The results are somewhat terrifying… but if you can look past the nightmare fuel, this creation represents an important step forward in the study of AI and imaging.

Unlike some of the genuinely mind-blowing machine learning algorithms we’ve shared in the past (see here, here, and here), this creation is more of a proof-of-concept experiment. The idea was to take a well-established computer vision model that can caption photos based on what it “sees” in the image, and reverse it: producing an AI that can generate images from captions, instead of the other way around.

This is a fascinating area of study and, as MIT Technology Review points out, it shows in real terms how limited these computer vision algorithms really are. While even a small child can do both of these things readily—describe an image in words, or conjure a mental picture of an image based on those words—when the Allen Institute researchers tried to generate a photo from a text caption using a model called LXMERT, it generated nonsense in return.