I don't work in AI, so I only know what I read here or on other sites. There seems to be a lot of buzz around AI and ML. But where are these technologies actually succeeding right now? I feel like there's supposed to be a revolution going on everywhere, but anywhere I look, it's just plans and press releases...
One interesting facet of the hype cycle is that it never focuses on what IS being done, because that's boring. Plus, tech improvements like this tend to be pretty operational in nature, so you wouldn't notice them without looking.
Nonetheless, "AI" applications are pervasive:
Automotive - improved robotics, adaptive cruise control
Finance - High Frequency Trading, Credit Risk Modeling (i.e. your Credit Score)
Health Care - Health insurance risk estimation, Predictive staffing
Government - Predictive policing, recidivism risk, benefits decisions
Retail - improved customer targeting, inventory management
etc etc etc - name an industry, I'll give you 3 examples.
The issue isn't that it's not there; the issue is that it's BORING. And nobody gives a press release saying "we saved 0.4% of COGS from improved inventory demand forecasts," even if that represents $10M, because nobody cares.
But boring doesn't mean it's not a bazillion dollar opportunity for a lot of companies.
Siri, Google Assistant, speech recognition, speech generation, textual photo library search, similar data augmentations for web search, Google Translate, recommendation algorithms, phone cameras, server cooling optimization, touch detection on phone touch screens, video game upscaling, noise reduction in web calls, file prefetching, Google Maps, OCR, etc.
AI has already won, most people just don't realize it.
Won what? Is there a competition? Humans still have jobs. Humans are still politicians, judges, CEOs, generals. They even still play chess!
All those successful forms of AI are narrow, not the AGI of science fiction (Data, Skynet, HAL) or of Ray Kurzweil's predictions. AI is a tool humans use to extend human capabilities. It always has been. Maybe someday it will be something more.
Yes, thank you. There has already been a revolution over the past five years or so, and many things that had been too audacious for science fiction became everyday products. I think the ML revolution hit me personally about five years ago, when I was able to get perfect speech recognition from my phone on a loud, crowded subway platform as a train was pulling in. I would never have thought that possible. I would have been skeptical if Star Trek had shown it.
From my perspective, this narrowing of the term AI to mean only "real intelligence" has always seemed like little more than an attempt to control the narrative against an astonishingly successful trend of connectionist architectures doing incredible things. Nobody complained when Pac-Man's ghosts were called AI, but now it's political.
You've got a point there about the neural networks, though I never considered the ghosts from Pac-Man to have AI. But why frame this as a "control the narrative"/political argument?
Predicting quantum mechanics energies of molecules using neural networks actually works, and can be used to speed up geometry optimization during drug discovery.
Well, this isn't predicting "quantum mechanics energies"; it's parametrizing the molecular bond interaction potential with a neural network instead of an analytic function (such as a Lennard-Jones potential).
It's nice, but not really quantum-mechanics level (which would be HF, DFT, or coupled cluster). Those take a lot more cycles, but they also allow optimizing geometries without knowing whether a bond exists.
These neural network models do not need to know whether a bond exists - in fact, they have no concept of bond topology. They are designed to be a drop-in replacement for DFT in terms of energies and forces. The only inputs are XYZ coordinates and chemical element labels for the nuclei (and, in the near future, net charge of the system).
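To make the "only XYZ coordinates and element labels" point concrete, here is a toy sketch of a neural-network potential in that spirit: per-atom radial features built from neighbour distances, one small per-element network, and a total energy as the sum of atomic contributions. Everything here (the Gaussian feature choice, network sizes, random untrained weights) is illustrative and mine, not the actual architecture of any real model like ANI or SchNet. Note there is no bond list anywhere.

```python
import numpy as np

rng = np.random.default_rng(0)

def atomic_features(coords, i, cutoff=5.0):
    """Radial, symmetry-function-like features for atom i:
    sums of Gaussians of neighbour distances within a cutoff."""
    d = np.linalg.norm(coords - coords[i], axis=1)
    d = d[(d > 1e-8) & (d < cutoff)]          # drop self-distance
    centers = np.linspace(1.0, cutoff, 8)
    return np.exp(-((d[:, None] - centers[None, :]) ** 2)).sum(axis=0)

def make_mlp(n_in=8, n_hidden=16):
    """One tiny MLP per chemical element (random, i.e. untrained)."""
    return (rng.normal(size=(n_in, n_hidden)),
            rng.normal(size=(n_hidden,)),
            rng.normal(size=(n_hidden, 1)))

nets = {"H": make_mlp(), "O": make_mlp()}

def energy(coords, elements):
    """Total energy = sum of per-atom network outputs."""
    e = 0.0
    for i, el in enumerate(elements):
        w1, b1, w2 = nets[el]
        h = np.tanh(atomic_features(coords, i) @ w1 + b1)
        e += float(h @ w2)
    return e

# Water-like geometry: inputs are just coordinates + element labels.
water = np.array([[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [-0.24, 0.93, 0.0]])
print(energy(water, ["O", "H", "H"]))
```

Because the features depend only on interatomic distances, the energy is automatically invariant to translating or rotating the molecule, which is one reason this representation works as a drop-in for DFT energies.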
In physics it has only recently become mainstream to experiment with/incorporate ML into thesis projects. Most uses I've seen are signal-processing related. An example might be particle track reconstruction in a time projection chamber with ML instead of a Hough transform. I think it's inevitable that these methods will grow in application, but the two biggest problems right now, in my opinion, are reproducibility and quantification of uncertainties. It's much easier to believe someone's stated uncertainties when you can see the analytic functions they were propagated through. There are ways to kind of work around this, but in my mind those two points are the main things holding ML back from broader application in science. The article talks about ML tools closer to proof assistants / tools for experimentally driven mathematics. That's less of a problem in that domain, since the ML model only needs to make an interesting conjecture, which can then be examined the traditional way.
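For anyone unfamiliar with the classical baseline being replaced: a Hough transform finds a track by letting every detector hit "vote" for all line parameters consistent with it, then reading off the peak in the accumulator. This is a minimal 2D sketch of that idea (real TPC reconstruction is 3D and far more involved; the bin counts and geometry here are just illustrative):

```python
import numpy as np

def hough_line(points, n_theta=180, n_rho=200, rho_max=20.0):
    """Return (theta, rho) of the strongest line x*cos(theta) + y*sin(theta) = rho."""
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    rho_edges = np.linspace(-rho_max, rho_max, n_rho + 1)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        # each hit traces one sinusoid through (theta, rho) parameter space
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.clip(np.digitize(rho, rho_edges) - 1, 0, n_rho - 1)
        acc[np.arange(n_theta), idx] += 1  # one vote per theta bin
    t, r = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[t], 0.5 * (rho_edges[r] + rho_edges[r + 1])

# Ten hits along the line y = x, plus a couple of noise hits.
track = [(float(i), float(i)) for i in range(10)]
noise = [(3.0, 9.0), (8.0, 1.0)]
theta, rho = hough_line(track + noise)
```

The appeal is exactly what the comment says: the whole procedure is an explicit, analytic voting scheme, so you can reason about how hit-position uncertainties propagate into the recovered track parameters, which is harder to do for a learned model.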
I agree with you on UQ. For example, I have seen a couple of talks in my field of neutron scattering where people are using denoising autoencoders to remove artifacts and fit data. It's also clear that no one has any idea how this affects the uncertainties on parameters for models that are fit to the denoised data, much less what happens if the models are not appropriate for the data.
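A toy demonstration of that worry, with a moving average standing in for the denoiser (my own simplification, not what those talks did): fit a straight line to noisy data, once raw and once after smoothing, and compare the naive least-squares standard error on the slope with the true scatter of the slope over many noise realizations. Smoothing shrinks the residuals but correlates the noise, so the naive error bar understates the real uncertainty.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
true_slope = 2.0

def fit_slope(y):
    """OLS slope and its naive standard error (assumes independent residuals)."""
    A = np.vstack([x, np.ones_like(x)]).T
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    s2 = resid @ resid / (len(x) - 2)
    cov = s2 * np.linalg.inv(A.T @ A)
    return coef[0], np.sqrt(cov[0, 0])

def smooth(y, k=7):
    """Stand-in 'denoiser': moving average with edge padding."""
    ypad = np.pad(y, k // 2, mode="edge")
    return np.convolve(ypad, np.ones(k) / k, mode="valid")

slopes_raw, se_raw, slopes_den, se_den = [], [], [], []
for _ in range(500):
    y = true_slope * x + rng.normal(scale=0.5, size=x.size)
    m, s = fit_slope(y)
    slopes_raw.append(m); se_raw.append(s)
    m, s = fit_slope(smooth(y))
    slopes_den.append(m); se_den.append(s)

print("raw:      naive SE %.3f vs true spread %.3f"
      % (np.mean(se_raw), np.std(slopes_raw)))
print("denoised: naive SE %.3f vs true spread %.3f"
      % (np.mean(se_den), np.std(slopes_den)))
```

On the raw data the naive standard error tracks the Monte Carlo spread; after "denoising" it comes out several times too small, because the smoothed residuals no longer reflect the slope estimator's actual variance. A real learned denoiser makes this harder, not easier, to diagnose.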
I think reproducibility can be tackled--at least some journals (shameless plug--I'm a lowly associate editor at Science Advances) are strongly encouraging people to include data/code with publications. I have reviewed papers in Nature Computational Materials where people included data and Jupyter notebooks (not perfect, but a very good start). It would be great if funding agencies started adding more teeth to data-sharing requirements. Encouragingly, many more groups are putting their code on GitHub.
It becomes easier to take the progress seriously, and to understand it, when you drop the "intelligence"-style labels, which mislead people into thinking something is there that isn't.
Machine "learning" isn't ideal either, but is at least a bit more limited in the scope of what it conveys.
Once you leave the hype baggage behind, it's easier to see the significant progress these tools - in concert with increased compute and data resources - have made in many different areas over the last few years, some of them listed elsewhere in the answers to your question.
I think this is down to the loudest, most ambitious projects ("AGI! Fully autonomous vehicles!") getting a lot of press. The reality is, production ML is basically everywhere already:
- Basically every piece of software that makes recommendations (Netflix, Google, Facebook, YouTube, Instagram, TikTok, etc.) uses machine learning.
- Anything that makes time series forecasts (Uber/Maps ETA prediction, Walmart's 2 hour delivery, etc.) uses machine learning.
- All the most popular speech-to-text assistants (Alexa, Google Assistant, Siri) use machine learning.
- Smartphone cameras use machine learning to enhance picture quality.
- A lot of very highly-used security monitoring solutions (Stripe's fraud detection, CloudFlare's bot detection, etc.) rely on machine learning.
- A surprising number of physical commerce-type situations rely on machine learning (autonomous filling stations, for example, are pretty common in the trucking industry).
- A lot of smart image manipulation tools (Instagram/SnapChat filters, etc.) rely on deep learning.
- Email clients, particularly Gmail, use machine learning for spam filtering and for things like Smart Compose.
- Some infrastructure products use machine learning, as in the case of EC2's predictive autoscaling.
And those are just hyper-scale examples. There's a ton of earlier-stage-but-still-in-production projects doing awesome things with ML:
- Wildlife Protection Solutions legitimately doubled their detection rate of poachers in nature preserves with ML.
- PostEra, Benevolent AI, and a bunch of other ML-based medicine platforms (medicinal chemistry, drug discovery, etc.) have already had exciting results.
- There are a bunch of startups building industry-specific APIs out of models, like Glisten.ai, that are already profitable.
- A number of computer vision products have been brought to market in the healthcare space—Ezra.ai screens full-body MRIs for cancers, SkinVision detects melanomas.
- ML-powered chatbots are a pretty huge market. Olivia (a financial assistant) has something like 500k users. AdmitHub has successfully lowered summer melt (the attrition of college-intending students between spring and fall) at a bunch of colleges. Rasa is an entire platform that helps startups build NLP-powered bots.
Sorry that went a bit long, but basically, the production ML space is incredibly deep, and spans most industries/company sizes. Unfortunately, press coverage of ML tends to treat it as if it's this mystic, sci-fi future technology, and as a result, this "Show me AGI or it's snake oil" mindset naturally emerges.
Any type of predictive analysis on high-dimensional data (medical imaging, surveillance, remote sensing, machine translation, speech recognition/synthesis, music information retrieval). Other important work is being done on causality, AI ethics/safety, and explainability (XAI), but it has had little industry impact yet.
Well... for starters, with machine learning we can automatically make anyone naked by applying a few algorithms and an ML model to a picture of a clothed person. Depending on who you're talking to, that's quite a breakthrough.
I always assumed most audio agents (e.g. Siri) use some form of AI and/or ML, and that Google's search results probably have some somewhere in the pipeline. But I don't know for sure.
Just being jaded, but all the money is in shareholders' and investors' pockets. Because once you have a disruptive AI-based startup, you have become enlightened and are now on course to change the future of the human race with your amazing AI. (AI? I meant a series of if statements and linear regression.)
AI means you have managed to throw a shitton of processing power at a problem and p-hack the shit out of your results so it shows you made a significant improvement.