I recently had a child. Watching her learn inspires a sense of amazement. It starts slow with basic grasping, then shaking, then hitting, then tracking, then pointing. You really see how humans learn from first principles. One difference I perceive between a child and a neural network is that it appears a child does not start from a blank slate. The brain of a baby seems to come hard-wired with some basic instinctual functionality like grabbing and object tracking.
> One difference I perceive between a child and a neural network is that it appears a child does not start from a blank slate. The brain of a baby seems to come hard-wired with some basic instinctual functionality like grabbing and object tracking.
I think that's likely, but also babies have ~9 months of development before birth, and for at least some of that time (3-6 months) they're moving around and have a brain.
Babies have millions of years of development behind them. I still can't believe we use babies as the baseline starting point for learning. When babies are born they are almost on auto-pilot (nature > nurture).
Mine is now 1.5 and the amazement continues. Hearing them learn to speak is also on a whole different level from anything an algorithm, or an adult, does.
They use some rudimentary logic even at that age. There is a thing that in English is called an "egg cup" (according to Google Translate :-)); there is one word for that thing in Dutch: "eierdop".
My daughter, having just heard the word "egg" for the first time that day, looked at an egg cup and decided that it was "ei-bakje" in Dutch, which roughly translates to "small container for egg". It only took maybe 2 seconds.
I think that ability to find relationships between things and extrapolate very rapidly from them is what makes human kids so fast at learning. Which may have something to do with the fact that human memory is very associative, almost like what near 100% data locality would be in CPU design with many many cores.
Thanks so much for sharing. It refreshes my own experience. I was so clear about what I wanted to do with my life - so many goals and aspirations. Then I remember holding my firstborn in one hand. Less than an hour old, she peered up at me almost inquisitively, as if to say "I hope you've got this?" At that moment I realized what my life was about. I can barely remember those goals and aspirations now. As they say in Spanish, my firstborn now "has" 42 years. Brought 6 kids and 5 grandchildren into this world.
As I age I am reminded of the same process you talk about, but in reverse now. Watch the Robin Williams movie "Awakenings" [1]. It really portrays it. We come into this world absolutely unable to do much of anything. We fade in. Then ... trust me ... we fade out. But in between, if we're really lucky, we get to shine.
Dutch is weird. I was interested to see whether 'dop' or 'erdop' meant something similar to cup, and the question is really just one of spacing.
I discovered actually 'dop' means something like 'shell'. So APPARENTLY eierdop all as one word means egg cup, but eier dop as two separate words means egg shell?
Dutch has fewer spaces. Generally if it's one object, it's one word. This causes a typical spelling mistake native Dutch speakers make in English: we combine words into one that should be two words.
In Dutch "eier dop" does not exist. Could be "ei dop" and then you're taking about two separate objects, an egg and a cap (as in bottle cap).
And that's also immediately an example, I tried to type that as bottlecap and my phone corrected it with an extra space ;-)
Object tracking is actually a very complex behavior, requiring significant computational power. The eye has little to do with this, except for providing the raw sensory information.
Basically the brain has to interpret each single image, then stitch together multiple images, understand that they are related, recognize objects within them, then start interpreting differences between them as movement etc.
But the serious part of my point is that the eye didn't evolve in a vacuum. It isn't "just a sensor" in an evolutionary sense because it evolved along with the brain and other senses as part of the holistic vision system. It's not like the brain is born with a blank slate and the eye as a sensor. It comes prebuilt with firmware and drivers, so to speak. That's then catalyzed by experience.
I think this is part of the problem in computer vision. Everyone just takes cameras and stitches them into a computer, hoping it can figure things out. It basically doesn't work. I think there needs to be embodied vision. I'd love to see any research that explores this. I have an intuitive suspicion that our vision system and processing is tightly bound to our body, in particular our neck, hands and legs. (This is unfounded, since I'm not a biologist or cognitive scientist, and is just my naive hunch. I'd be surprised if it's far off though.)
I have some recollection of a study I once came across (as a warning, it was somewhat disturbing) about the cognitive development of dogs, where the researchers took a number of puppies and somehow arranged things so that the puppies wouldn't see any changes in their vision while moving. Something like they were kept in a box with their heads secured facing one side of the box, with a treadmill that would occasionally run so that their muscles still developed. And what they found was that the puppies were rather severely non-functional when they were finally removed from their boxes. I think it was along the lines of functional blindness and a lack of normal canine volition to move around and do things, and the puppies weren't able to go on and develop normally afterwards. The conclusion was something about the importance of the visual field changing in response to their bodies trying to move around. Frankly it seems like a "no shit, Sherlock, was such an unpleasant experiment really necessary?" kind of conclusion, but it seems in line with your intuition about the crucial binding between vision and the motion of various limbs (and implicitly, that the environment actually produces different -- and to our visual networks, useful in some particular ways -- inputs as we move our limbs and neck around).
By the way, if anyone happens to know of the study I'm thinking of, I'd love to have a link. The last time I tried searching on Google / DDG, the results were drowned out by unhelpful dog articles.
Related to your hunch, I'm also speculating, but I'd think at the very least our eye muscles are an integral part of our vision system, it's not just the retina itself.
In animals some things are definitely hardwired. I had a pet rabbit and made the mistake of watching a nature documentary about snakes when she was in the room. The mere sound of a rattlesnake on TV caused the rabbit to stamp a few times, spray urine, and run away to her hutch (skidding on the floor as she left).
I'm sure humans have similar instincts. For example, the tendency of young kids to avoid plants until they are shown that they are edible - possibly leading to the general preference for artificial foods over anything that might be found in a garden. [1]
I've also noticed that toddlers don't gravitate towards knives the same way they do everything else. I certainly wouldn't rely on this behaviour, but I've noticed it in my own children and others.
Observing how children need a fairly limited number of "learning examples" or "training sets" to learn that a cat is a cat, for instance, remains one of the most solid arguments convincing me that AI fundamentally is not reproducing what actually happens in human brains.
Actually it's closer than you might think. If you consider the eye and visual cortex to be the first layers of a deep learning network, we actually get a relatively similar system. However it has limitations further down the line in how information is represented. The eye is genetically programmed to detect lines, hard edges, colours, and more complex patterns. This is exactly the same as the first layers of a neural net.
However we then humanly assemble stuff like cat = eyes + fur + pointy + cute + small + meowing. A neural net has no way yet to represent information at that level, but we are getting closer there, beyond dumb numbers. A neural net implicitly has the concept, but it is not yet doing the 3D representation inside itself, which makes it harder to define something like 'ears' and 'tail'.
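As a concrete illustration of the "first layers detect edges" point (my own toy sketch, not from any paper): a fixed Sobel-style kernel, similar in spirit to the oriented edge detectors in early visual cortex and in the first convolutional layers of trained image networks, lights up exactly where a vertical edge sits in a tiny image.

```python
# Toy sketch, illustrative only: a fixed 3x3 Sobel-style kernel responding
# to a vertical edge, like the oriented filters in early layers of a CNN.
import numpy as np

def convolve2d_valid(image, kernel):
    """Plain 'valid' correlation-style 2D convolution, no libraries, for clarity."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A toy 6x6 image: dark on the left, bright on the right (a vertical edge).
image = np.array([[0, 0, 0, 1, 1, 1]] * 6, dtype=float)

# Sobel-style kernel that responds to vertical edges.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

print(convolve2d_valid(image, sobel_x))  # large values near the edge, zero in flat regions
```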
I wonder how much the child's sensory reward system leads to memories and associations. How can artificial neural networks ever "feel" good about anything, and use that to learn?
A child learns that petting a cat is better than hitting the cat. Petting feels better, and the cat will stick around for longer. But how does AI learn the same thing without any pleasurable sense the petting provides?
Also will we see robots throwing tantrums? Children learn this tactic early, and it works to achieve desired outcomes. Later they progress to crafty manipulation tactics, such as carefully executed tone when asking for treats etc.
We have a cost function that could loosely be correlated to a pain/sadness function. Perhaps it makes sense to also include a pleasure function to give them some kind of autonomous direction. Perhaps it is this part of AI that will ultimately lead to trouble. Is pleasure the inverse of pain?
I think the inverse of pain is just comfortable or satisfied. Like a cat sitting on the mat. If the cat finds an electric heated blanket nearby and moves from mat to warm blanket, we are now in pleasure zone. Cat now has more than it needs.
AI could use a pleasure function, but I'm not sure how it could justify seeking or getting more than it needs. The "pleasure" would need to be linked to a measurable benefit for its hardware or systems, and energy would need to be used to achieve that. I guess it could happen, but then we'll need greed functions to cap the pursuit of pleasure!
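Speculating further in the same toy terms (entirely made up, not any real system): a composite reward where "pain" is a cost, "satisfied" covers needs, and "pleasure" is a bonus beyond needs, capped by a "greed function".

```python
# Toy sketch only: a made-up composite reward with a "pain" cost, a baseline
# "needs met" term, and a capped "pleasure" bonus (the "greed function").
def reward(damage, energy_level, energy_needed, surplus_comfort, greed_cap=1.0):
    pain = -damage                                       # negative reward for harm/cost
    satisfied = min(energy_level / energy_needed, 1.0)   # 0..1, "cat on the mat"
    pleasure = min(surplus_comfort, greed_cap)           # bonus beyond needs, capped
    return pain + satisfied + pleasure

# The warm blanket beats the mat, but only up to the greed cap.
print(reward(damage=0.0, energy_level=5, energy_needed=5, surplus_comfort=0.3))  # 1.3
print(reward(damage=0.0, energy_level=5, energy_needed=5, surplus_comfort=3.0))  # 2.0 (capped)
```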
> Also will we see robots throwing tantrums? Children learn this tactic early, and it works to achieve desired outcomes.
Sigh. A tantrum isn't a "learnt tactic" (it's the child's inability to control his frustration or other emotions), and it doesn't work to achieve the desired outcome, which is why tantrums stop once the child is mature enough to have better emotional control (unless you're trying to raise a tyrannical child, in which case giving your child what he wants when he throws a tantrum is a good way to succeed…)
I think yours was a good question in a way (it may make sense in some contexts, even if the translation to machines is left undefined): it may be part of the "paperclip maximizer" idea of the strategies a non-commonsensical agent may adopt to achieve its goals.
In automated chip layout, for example, one can already be presented with odd NOP circuitry that turns out to be relevant only to timing - all tricks are possible.
"Sighs" could presumably be the reversal manouvers in decision trees walking.
They all do it, apparently it's because their motor skills initially aren't developed enough to use their fingers to learn texture. Putting things in your mouth provides more data so to speak.
Mine also really liked eating sand for a while :-)
I believe it's much simpler: since eating is primary for any living species, how else would you differentiate food from non-food while making sense of the world (and still needing to eat)?
The problem with that is that if it isn't food they usually don't try to eat it. They just put the object in their mouths and kind of feel it around. There's no attempt to consume it.
> Typically, AI models start with a blank slate and are trained on data with many different examples, from which the model constructs knowledge. But research on infants suggests this is not what babies do. Instead of building knowledge from scratch, infants start with some principled expectations about objects [...] The exciting finding by Piloto and colleagues is that a deep-learning AI system modelled on what babies do, outperforms a system that begins with a blank slate and tries to learn based on experience alone
The interesting thing is that the author of the article you linked is currently reviewing the paper, or was at the time of writing:
>> New research by Luis Piloto and colleagues at Princeton University – which I’m reviewing for an article in Nature Human Behaviour – takes a step towards filling this gap.
I don't reckon I've seen this kind of article before: "here's some new research I'm reviewing and I think it's rad". That just sounds so... dodgy. You're supposed to maintain at least the facade of being impartial and having some sort of integrity when making reviews, even when reviewing a DeepMind paper for Nature.
The similarity you see between this DeepMind project, the DARPA program, and research in Tenenbaum's lab is not incidental: there's a steady stream of crosstalk and cross-training between machine learning researchers who engineer artificial intelligences and cognitive scientists who reverse-engineer human intelligences. (Note, for example, that Peter Battaglia, one of the co-authors of this DeepMind project, was a postdoc with Tenenbaum.)
That scans with observations from recent text-to-image works where there often seems to be an insight or two either left without citation or citing an unpublished work that they used to avoid testing an (ultimately) incorrect hypothesis.
I have seen some suggest that this is basically Google “allowing” the competition to catch up just to beat them a few weeks later but generally it just seems like they’re kind of all chatting with each other in the background instead.
I feel like there is an inferential leap implied, to greatly simplify, from "A does X and B does X" to "A and B must operate relevantly similarly." For example, walking and flying are both modes of transportation, but you can't really learn anything interesting about one from studying the other
Once you understand that walking gets you from where you are to where you want to be, you start to define the characteristics of motion. Then, seeing other forms of conveyance that are faster underpin the concept of efficiency.
Walking and flying may not have a lot in common to you and I, but to a thing learning to crawl, there is a lot to be understood.
Demonstrations almost always are about sufficiency, not necessity.
If I make a machine that walks to the store, I have shown that walking is sufficient to get to the store. I have not ruled out the possibility that you bicycle to the store.
But if there has been a long running passionate debate on whether walking could ever get you to the store, showing sufficiency can settle that.
Underrated comment. This is the "correlation is not causation" side of "this ML worked, so this must be how a brain works", and it needs to be hit on the head time and time again.
Maybe we can teach an ML to whack-a-mole until we see ideas pop out which the ML doesn't think are repetitive correlation/causation/model-is-working effects
Even more severely, I think this is pseudoscience -- though a genuinely borderline case "with real value".
All they're doing is feeding in pixel patterns and object information for specific scenes, and then calling the model's ability to detect general correlations in those scenes "solidity", "inertia", and the like.
Any time patterns in correlations are taken as an explanatory model, we have pseudoscience.
Yes, inertia/solidity/etc. is the right answer for why physical objects show those visual correlations. But pixel patterns in some images are NOT models of inertia, solidity, etc.
Just as correlations in pixel patterns in the sky are not a model of gravity. The right model, e.g., is F = GMm/r^2, whose variables are not even present in the data. So no correlative model of a night sky will ever be a model of gravity.
>For example, walking and flying are both modes of transportation, but you can't really learn anything interesting about one from studying the other
You can figure out Newtonian mechanics entirely on the ground, and that clearly helps you understand flight. By analogy, getting a better understanding of what limits exist in ANNs could plausibly help us understand how the brain works (and vice versa).
-- DeepMind AI learns physics by watching videos that don't make sense - An algorithm created by AI firm DeepMind can distinguish between videos in which objects obey the laws of physics and ones where they don't - https://www.newscientist.com/article/2327766-deepmind-ai-lea...
> Luis Piloto at DeepMind and his colleagues have created an AI called Physics Learning through Auto-encoding and Tracking Objects (PLATO) that is designed to understand that the physical world is composed of objects that follow basic physical laws. // The researchers trained PLATO to identify objects and their interactions by using simulated videos of objects moving as we would expect [...] They also gave PLATO data showing exactly which pixels in every frame belonged to each object. To test PLATO’s ability to understand five physical concepts such as persistence..., solidity and unchangingness..., the researchers used another series of simulated videos. Some showed objects obeying the laws of physics, while others depicted nonsensical actions [with the latter, correctly the AI returned wrong predictions, showing an acquired intuition of physics]
From the submitted one:
> [Jeff Clune, Uni British Columbia, Vancouver: ]«[Comparing AI with how human infants learn is] an important research direction. That said, the paper does hand-design much of the prior knowledge that gives these AI models their advantage». // Clune and other researchers are working on approaches in which the program develops its own algorithms for understanding the physical world
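To make the evaluation idea in the New Scientist quote concrete, here is a rough sketch (my own simplification, not DeepMind's code; `model.predict_next` is a hypothetical API): treat "surprise" as the model's prediction error over a video's frames, and count a probe pair as correct when the physically impossible video produces more surprise than its matched possible counterpart.

```python
# Rough sketch of the evaluation idea as I understand it (illustrative only).
import numpy as np

def surprise(model, frames):
    """Mean squared error between the model's predicted and actual next frames."""
    errors = [
        np.mean((model.predict_next(frames[t]) - frames[t + 1]) ** 2)  # hypothetical API
        for t in range(len(frames) - 1)
    ]
    return float(np.mean(errors))

def probe_score(model, probe_pairs):
    """probe_pairs: matched (possible_video, impossible_video) tuples.
    Returns the fraction of pairs where the impossible video is more surprising."""
    hits = sum(
        surprise(model, impossible) > surprise(model, possible)
        for possible, impossible in probe_pairs
    )
    return hits / len(probe_pairs)
```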
Data and analysis alone, without experimentation, don't seem like enough to achieve real intelligence. From its title this article sounded like it would be about progress in learning by doing. Alas, it's not.
Right. "You have to tell yourself stories", as the late Prof. Patrick Winston said (you are intelligent because you can predict the unexperienced). Because you need concept development and critical thinking - an active process.
Not only are some contented with AI solutions for very specific problems, like the optimal colour for bottle caps, while others still keep the ideal of progress towards AGI very much present;
but the idea and practice of finding new tricks that may help the whole field also remains. This research seems to have found a shortcut strategy - faster ML.
Looking at the photo, I don't think the AI is going to realize that eating the piece it is about to pick up will make it choke. I would actually like to see more reinforcement learning agents like that; the action space of infant movement is quite small, so to some extent it's really about "action space discovery". The things it discovers are way more interesting: if food is not at floor level and it has to stand to get it, it will eventually get there after N attempts (over time); and then if you introduce another agent, and learning to block the other agent awards more food, it discovers 50/50 splits and equilibria (better to eat now than wait). PLATO seems like a step in that direction.
They evaluated their model by producing a test set with videos spliced so as to compose physically "impossible" sequences (that's the claim anyway - who knows how "impossible" those sequences are). Then they measured the trained network's prediction error on the spliced videos and called that "surprise", implying that it is somehow similar to the "surprise" shown by human babies when they observe physically impossible sequences (to clarify, in the psych literature this "surprise" is measured as a function of the time spent looking at a scene). Then they measured "accuracy" as the difference in "surprise" between possible and impossible videos.

There are two problems with this. First, it's the old switcheroo: it just relabels prediction error as "surprise". Yeah, "surprise" is interpreted, in a Bayesian setting, as the difference between an expected event and an actually occurring event, but here we're not in Bayesian world. There's just a classifier failing to classify correctly. By the same token, any classifier's classification error can ever be interpreted as "surprise", even though it's just failure to acquire the target concept.

Second, this measure of "surprise" has nothing to do with the measure of surprise used in experiments with human babies, where "surprise" is a function of the time the baby spends looking at a scene. For a human baby it makes sense to interpret longer looking time as a double-take: "what the hell just happened?" But for a neural net? How can a neural net do a double-take? The time spent "looking" at any image is entirely controlled by the experimenters. This is an apples-to-oranges comparison and the result is a lemon.

Third, their "flat" networks trained without hand-crafted object segmentation show lower "surprise", but since that's just the sum of squared errors, it only means that the flat networks have learned classifiers that perform better on the test set than the proposed approach. When "accuracy" is compared, the flat networks score lower, but that should be expected: if "accuracy" is measured as the difference in classification error, the model that is better at classifying across all tasks should have lower "accuracy". But that's because it has higher classification accuracy.
(yeah, that's an off-by-one error, well spotted).
Then, to test whether their system generalises to unseen objects and events, they tested it on a separate dataset, called ADEPT, and found that it scores very well. The only problem is that they measured "accuracy" again as the difference in mean squared errors between "possible" and "impossible" scenes. All this is measuring is the _lack_ of generalisation from training to test images.

I'm sorry to be so negative, again, but all I see here is playing with language and rechristening error as "surprise" and "accuracy". There is nothing that shows acquisition of "intuitive physics" as the paper keeps claiming.
I wish ML researchers (EDIT: and engineers and journalists) stopped using anthropomorphizing language. This has decades of solid tradition, but that's no excuse. Any comparison of a machine to a human misleads the public. Machines aren't like babies, artificial neural networks aren't like actual neural networks or brains. Machines shouldn't be given human names (PLATO is a borderline case).
I know this is like talking to a wall -- money requires hype -- but still, please stop doing that.
> Actual language used by the ML researchers: "Intuitive physics learning in a deep-learning model inspired by developmental psychology"
In my opinion, this is still anthropomorphizing the algorithms. The term "deep learning" is a poor representation of what actually goes on. Someone please correct me if I'm wrong, but all ML does is statistical regression (in essence). It doesn't "learn" like a person learns. Neural networks are not actually like brains (as far as we understand how the brain works).
I feel like the whole industry is inundated with aphorisms that are kind of true, but not wholly true. Evolutionary algorithms, neural networks, deep learning, deep mind, this stuff all reeks of anthropomorphizing fundamentally mathematical processes. I get it, it's a lot easier to get the gist of "the computer is learning/training" than "the computer is refining the weights and biases to try to optimize the output".
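For what it's worth, here is a minimal sketch of what that "refining the weights and biases to optimize the output" amounts to mechanically: plain gradient descent on a tiny least-squares fit (toy data, illustrative only).

```python
# Minimal sketch of "refining weights and biases to optimize the output":
# plain gradient descent on a least-squares fit of y = w*x + b.
import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([1.0, 3.0, 5.0, 7.0])   # generated by y = 2x + 1

w, b, lr = 0.0, 0.0, 0.05
for step in range(2000):
    err = (w * xs + b) - ys
    # Gradients of the mean squared error with respect to w and b.
    grad_w = 2 * np.mean(err * xs)
    grad_b = 2 * np.mean(err)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # approximately 2.0 and 1.0
```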
Well, it's not called a person-learning-machine, is it? Why would something have to learn "like a person" to be able to use the word "learn"? Those two concepts are not attached to one another. If they were, saying "learn like a person" would be a pleonasm, yet it isn't.
IMHO "learning" is a fine term, it conveys the idea of what is happening effectively and quickly.
Also, we don't know how a person learns anyway, it might very well be a similar process, just way more efficient and complex.
> Evolutionary algorithms
How would you propose naming them? You have a generation of agents, each with their own specificities, and from the agents most successful at accomplishing the task at hand, we derive a new generation, slightly modified from their parents.
It seems to me "evolution" is again the most suitable and efficient way of describing what is happening.
While I agree that there is definitely too much anthropomorphizing surrounding AI, I feel you are going way too far in the opposite direction. Not every word that can be associated with natural processes or humans should be banned from being used anywhere else.
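For what it's worth, here is a bare-bones sketch of the "generations of slightly modified agents" description a couple of comments up (toy task invented for illustration: evolve a bit string of all ones).

```python
# Bare-bones sketch of an evolutionary algorithm (toy task: evolve a string of all 1s).
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 20, 30, 100

def fitness(genome):
    return sum(genome)  # "success at the task at hand": count of 1s

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Keep the most successful agents of this generation...
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 4]
    # ...and derive a new generation, slightly modified from its parents.
    children = []
    while len(children) < POP_SIZE:
        child = list(random.choice(parents))
        i = random.randrange(GENOME_LEN)
        child[i] = 1 - child[i]          # small mutation
        children.append(child)
    population = children

print(max(fitness(g) for g in population))  # close to GENOME_LEN
```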
Who cares? Why should you hold the idea that it would? They are systems moulded after data, and "learn" seemed to be a decent label. If it is not, it is because "learning" is _active_, by philological analysis, and that is happily consistent with an aim of AGI (intelligent entities learn actively).
A computer does not compute like a human would. Yet, no problem.
For that matter, you are using 'person' in a very individual way - not even "personal" (a "person" learns according to individual nature, while you are using it as a collective term).
As already expressed - nearby I wrote 'biomimicry' - what you are calling "anthropomorphizing" is the wrong direction: "evolutionary algorithms" were born out of cues taken from observation of the natural world, and the terms express that - it is not that you saw the algorithm and went "It looks like my uncle Oscar"¹ (this side is active - it "learns").
(¹Those anthropomorphizing Hollywood cultists and all that sculpture...)
> a "person" learns according to individual nature, while you are using it as a collective term
There's a very specific definition for learn:
>> to gain knowledge or understanding of or skill in by study, instruction, or experience[0]
There are a few more, but none of the definitions treat "learn" in a non-collective way. I guess Merriam-Webster's dictionary doesn't like treating people as individuals or something lol.
Additionally, all the definitions there are speaking in human contexts. They talk about learning in the sense of being taught, or gaining experience, or gaining knowledge. Sure a computer kind of does this stuff, but it doesn't really. And that falls into the category of attributing human characteristics to an inanimate object.
I probably shouldn't have said that everything in the short list I wrote reeked of anthropomorphizing processes. But the evolutionary algorithm was more in line with what I mentioned immediately before. My whole comment read:
> I feel like the whole industry is inundated with aphorisms that are kind of true, but not wholly true. Evolutionary algorithms, neural networks, deep learning, deep mind, this stuff all reeks of anthropomorphizing fundamentally mathematical processes.
An evolutionary algorithm definitely falls into the category of kind of true but not wholly true. But it's not anthropomorphic.
> intelligent entities learn actively
Also, this is a very loaded statement. What is an intelligent entity? If you Google "is a computer intelligent" there are various papers, articles, and other pieces of media all claiming that we can't call a computer intelligent, and some claiming that we can consider certain algorithms somewhat intelligent. This is anything but an accepted standard today.
Give us an example of some relevant label that would be "«wholly true»" instead of "«just kind of true»". Because metaphors, like the whole system of naming, are based on fuzzy pattern relations.
> none of the definitions treat learn in a non-collective way
You have misunderstood my post. I would prefer that you read it again.
You are complaining about loose use of the language: I noted that you yourself used the term 'person' more than loosely, with a dubious jump. When one says "«like a person learns»", that is supposed to mean "like a specific individual, with his own individual characteristics, will learn" - instead you used it to mean "like people in general learn". A "person" is a "definite form", not a generic individual representing common features - it is the opposite.
> very loaded statement
Which you are taking out of context. I said that you have to call the moulding of your functions something, and that "learn" seems a very acceptable term, since it is bottom-up instead of top-down, it is automated instead of encoded: it is developed against data, it "learns". And that if the term is disliked, there could be a very good reason, because 'learn' was born as a sort of a hunting term¹ - it really means something like "investigate" -, which is a happy coincidence because what is largely missing in AI is critical thinking, part of the active process of learning ("learning" is active as investigation is). And the day John will have to check «accepted standard[s]» to see how things are, I will be willing to comply to his sad request for mercy.²
¹Regardless of what the Merriam-Webster will write, because from a "dictionary of use" you get a none-the-wiser relative notion but not knowledge - just as at the entry for 'life' you will not find the meaning of life.
²John must be, tautologically, an "active learner". (He will check personally.)
You're right that journalists use anthropomorphization much more. But AI researchers also have a long history of choosing terms that are anthropomorphizing or animating. Here the name PLATO -- which evokes an image of an ancient philosopher, a human, who is by cultural tradition considered smart -- is used in the original journal article.
Terms like "neural network" and "artificial intelligence" are frequently used by AI engineers and researchers despite the obvious image they evoke. Sometimes they even call their creations "brains". Also note the name DeepMind.
To add to that, often EDITORS are the ones who come up with the titles, for reasons beyond clarity, like using words that draw attention and fitting a specific space.
My pet peeve is when AI researchers coin new terms for objects that can be described by well-established mathematical terms. For example, saying a neural network layer has "256 units" instead of "output dimensionality of 256".
But at some point you need to name things for brevity. I understand why people say "activation function" instead of "elementwise monotonic nonlinear function".
Misuse is also rampant, like using "inference" to describe evaluating a neural network on an input, even when the NN isn't part of a probabilistic model.
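To illustrate the terminology point, here is the same layer described in jargon and in plainer mathematical terms (illustrative only, no particular framework implied; the names are made up).

```python
# Illustrative only: the same layer described in "ML jargon" and in plain math terms.
import numpy as np

rng = np.random.default_rng(0)

# "A layer with 256 units"  ==  "an affine map with output dimensionality 256"
W = rng.normal(size=(256, 784))   # weight matrix: R^784 -> R^256
b = np.zeros(256)

def layer(x):
    # "activation function"  ==  elementwise nonlinear function (here ReLU)
    return np.maximum(W @ x + b, 0.0)

# "running inference"  ==  simply evaluating the function on an input
y = layer(rng.normal(size=784))
print(y.shape)  # (256,)
```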
To be fair, given high degrees of interdisciplinarity and imperfect acquaintance with all the terminologies (and imperfect memory), given that we mix natural language and conventional technical language, with some continuity, and given that natural language itself mixes original core root meanings and later conventions, and given that even biologically the best term may occasionally (being polysemous) be hard to find, the mess is expected.
> artificial neural networks aren't like actual neural networks or brains
Just to zoom right in on neural networks:
People often say this, and I never see a solid argument.
I know very little about biological neural networks.
Clearly they are very different in some respects, for example, meat vs silicon.
But I never see a good argument that there's no perspective from which the computational structure is similar.
Yes, the low-level structure and the optimization are different, but so what? You can run quicksort on a computer made of water and wood, or vacuum tubes, or transistors, and it's still quicksort.
Are we sure there aren't similarities in terms of how the various neural networks process information? I would be interested in argument for this claim.
After all, the artificial neural networks are achieving useful high level functionality, like recognizing shapes.
There are many ways one can argue for or against this comparison. This is mostly a matter of terminology. However the problem is that the field of AI has been for many decades consistently shaping its language to evoke human-like connotations in order to boost hype. This article's title is a yet another example of that.
There are a few places where artificial neural networks conceptually diverge for computational reasons.
One is the notion of time and connectivity loops. Overwhelmingly, ANNs use a feed-forward architecture where the network is a directed graph without loops, and some input is transformed to some output in a single pass - and weights can be adjusted in a single reverse pass, which is very practical for training. We do know that biological brains have behavior that relies on signals "looping through" the neurons, and that is fundamentally different from, for example, running a network iteratively (like generating text word-by-word via GPT-3). We have artificial neural network simulations that do things like this, and also simulations of "spike-train" networks (which can model other time-related aspects that glorified perceptrons can't), but we don't use them in practice, since the computational overhead means that for most common ML tasks we can get better performance by using an architecture that's easy to compute and allows a few orders of magnitude more parameters, as size matters more.
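A toy sketch of that distinction (a made-up minimal example, not any real architecture): a feed-forward network maps input to output in one directed pass, while a recurrent one carries hidden state through time so signals can "loop through".

```python
# Toy sketch of the feed-forward vs recurrent distinction (made-up minimal example).
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(8, 4)), rng.normal(size=(3, 8))    # feed-forward weights
Wx, Wh = rng.normal(size=(8, 4)), rng.normal(size=(8, 8))    # recurrent weights

def feedforward(x):
    # One directional pass, input -> hidden -> output, no loops.
    return W2 @ np.tanh(W1 @ x)

def recurrent(sequence):
    # Hidden state is fed back into the network at every time step.
    h = np.zeros(8)
    for x in sequence:
        h = np.tanh(Wx @ x + Wh @ h)   # the signal "loops through" the hidden units
    return h

print(feedforward(rng.normal(size=4)).shape)                     # (3,)
print(recurrent([rng.normal(size=4) for _ in range(5)]).shape)   # (8,)
```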
It is not the case - this is just biomimicry: "let us try imitating feats of a living organism". Perfectly legitimate. Nobody is told to make undue images out of it.
"DeepMind AI learns simple physics like a baby" clearly makes an unduly image out of it. Calling it PLATO evokes an image of an ancient human philosopher. No other field uses as many bold comparisons to humans as artificial intelligence (its name alone is one).
But you are supposed "not to evoke": that would be sensationalism.
"The name of AI": we call "intelligent" in this convention that which finds solutions - normally the natural intelligence of a professional, sometimes the artificial intelligence of a computerized system. As easy as that. It works, no intrinsic issue.
This experiment: children seem to rely on expectations in learning and this ANN based system tries to implement some form of "expectation based learning". No problem.
A similar question would be: "Has the last human that will make 'Einstein' level discoveries already been born?"
This leaves room for some 10 year old to do so in the near future, but places some kind of event horizon for AI achievement eclipsing human achievement closer than about 25 years into the future.
At this stage the most relevant tools aiding research (unless I missed something discreet) have the long-known issue of "transparency", or lack thereof: solutions may work, but for the growth of knowledge your actual interest is the "how and why".
I think it's impressive what the DeepMind team comes up with, but the inflationary use of the term Artificial Intelligence is annoying.
This should be reserved for something that is truly intelligent. As long as that has not been achieved, it's simply a well-trained model that produces high-accuracy predictions for very narrowly defined tasks.
> This should be reserved to something that is truly intelligent
No. I have had to write this so many times. It's pretty clear: at some point, we decided "Engineer, if you come up with a solution we will call you intelligent, and if your algorithm does, we will call it intelligent as well".
There exists an interpretation of the term that makes the term sound.
It's pretty clear, really - and that "intelligence [tongue-in-cheek] [of specific solvers]" is _not_ "[intellectual] Intelligence", which by the way is not even the last and highest meaning of it.
It is the second time in this very page: «we call "intelligent" in this convention that which finds solutions - normally the natural intelligence of a professional, sometimes the artificial intelligence of a computerized system».