Emergent Properties

#1
by Xecut - opened

"Intelligence can be mimicked at small scale — but not yet achieved."

The way I see it, emergent behavior becomes more pronounced as parameter count grows: 1B -> 7B -> 14B -> 30B -> 70B -> 100B

Maybe the way to solve small-scale intelligence is to reduce the problem set and allow more room for emergence. Something not trying to achieve human intelligence, but something closer to an actual lab mouse.

AxionLab Co. org

I like that perspective, especially the idea of reducing the problem space.

But I’d push back a bit on my own earlier statement: I don’t think small-scale intelligence is just mimicry anymore.

It seems more accurate to say it’s constrained rather than absent.

While emergent behavior becomes more pronounced with scale, approaches like reinforcement learning and self-play suggest that meaningful behavior can still emerge in smaller systems given the right structure and environment.
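As a toy illustration of that point (my own sketch, not anything from a specific paper): a tabular Q-learning agent with just 12 numbers of "parameters" still acquires goal-directed behavior in a tiny corridor environment, because the environment and reward structure are small enough for the behavior to emerge. All names and constants below are made up for the example.

```python
import random

# A 1-D corridor with 6 cells; reward sits at the rightmost cell.
# The "model" is a 12-entry Q-table — about as small-scale as it gets.
N_STATES = 6
ACTIONS = (-1, +1)            # step left or step right
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

random.seed(0)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move within the corridor; reward 1.0 only on reaching the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

for _ in range(500):          # training episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        nxt, r = step(s, a)
        best_next = max(q[(nxt, act)] for act in ACTIONS)
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
        s = nxt

# Greedy policy after training: head straight for the goal.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

The point isn't the algorithm itself, it's that "achieved within limits" is visible even here: the behavior is genuinely learned, not mimicked, but only within the constraints of a 6-cell world.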

So instead of “not yet achieved”, I’d frame it as “achieved within limits”.

I was having a chat with my local LLM and think this is relevant here.

"The pattern is structural parallelism: both mathematics and physics use axiomatic or theoretical frameworks where solutions (proofs or unified theories) emerge from constraints, and the absence of a current solution does not imply impossibility—it only indicates incomplete knowledge or unresolved constraints."

Another way to look at it, metaphorically: the problem space is a large house that never changes, but the mouse can go anywhere in it, even through the walls.
