• 0 Posts
  • 35 Comments
Joined 2 years ago
Cake day: June 29th, 2023

  • We don’t have the same problems LLMs have.

    LLMs have zero fidelity. They have no - none - zero - model of the world to compare their output to.

    Humans have biases and problems in our thinking, sure, but we’re capable of at least making corrections and working with meaning in context. We can recognise our model of the world and how it relates to the things we are saying.

    LLMs cannot do that job, at all, and they won’t be able to until they have a model of the world. A model of the world would necessarily include themselves, which is self-awareness, which is AGI. That’s a meaning-understander. Developing a world model is the same problem as consciousness.

    What I’m saying is that you cannot develop fidelity at all without AGI, so no, LLMs don’t have the same problems we do. That is an entirely different class of problem.

    Some moon rockets fail, but they don’t have that in common with moon cannons. One of those can in theory achieve a moon landing and the other cannot, ever, in any iteration.


  • If all you’re saying is that neural networks could develop consciousness one day, sure, and nothing I said contradicts that. Our brains are neural networks, so it stands to reason they could do what our brains can do. But the technical hurdles are huge.

    You need at least two things to get there:

    1. Enough computing power to support it.
    2. Insight into how consciousness is structured.

    1 is hard because a single brain alone is roughly as powerful as a significant chunk of worldwide computing; the gulf between the power we have and the power we’d need is essentially the entire amount. We are woefully under-resourced for that. You also need to solve how to power the computers without cooking the planet, which is not something we’re even close to solving currently.

    2 means that we can’t just throw more power or training at the problem. Modern NN models have an underlying theory that makes them work: they’re essentially statistical curve-fitting machines. We don’t currently have a good theoretical model that would allow us to structure the NN to create a consciousness. It’s not even on the horizon yet.
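
    To make the curve-fitting point concrete, here’s a minimal toy sketch (pure NumPy, and nothing to do with real LLM internals): a tiny network fitting a noisy sine curve by gradient descent. It fits the curve well; it does not understand sines.

    ```python
    # Toy "statistical curve-fitting machine": a one-hidden-layer network
    # trained by gradient descent to fit y = sin(x). Illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
    y = np.sin(x) + rng.normal(0, 0.05, x.shape)          # noisy target curve

    W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)   # hidden layer
    W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)    # output layer

    lr = 0.1
    for step in range(3000):
        h = np.tanh(x @ W1 + b1)                # forward pass
        pred = h @ W2 + b2
        err = pred - y                          # residuals of the fit
        gW2 = h.T @ err / len(x); gb2 = err.mean(0)        # backprop
        dh = (err @ W2.T) * (1 - h ** 2)
        gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1          # gradient descent step
        W2 -= lr * gW2; b2 -= lr * gb2

    print("final mean squared error:", float((err ** 2).mean()))
    ```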

    Those are two enormous hurdles. I think saying modern NN design can create consciousness is like Jules Verne in 1867 saying we can get to the Moon with a cannon because of “what progress artillery science has made in the last few years”.

    Moon rockets are essentially artillery science in many ways, yes, but Jules Verne was still a century away in terms of supporting technologies, raw power, and essential insights into how to do it.


  • You’re definitely overselling how AI works and underselling how human brains work here, but there is a kernel of truth to what you’re saying.

    Neural networks are a biomimicry technology. They explicitly work by mimicking how our own neurons work, and surprise surprise, they create eerily humanlike responses.

    The thing is, LLMs don’t have anything close to reasoning the way human brains reason. We are actually capable of understanding and creating meaning; LLMs are not.

    So how are they human-like? Our brains are made up of many subsystems, each doing extremely focussed, specific tasks.

    We have so many, including sound recognition, speech recognition, and language recognition. Then on the flipside we have language planning, then speech planning, and motor centres dedicated to creating the speech sounds we’ve planned to make. The first three get sound into your brain and turn it into ideas; the last three take ideas and turn them into speech.

    We have made neural network versions of each of these systems, and even tied them together. An LLM is analogous to our brain’s language planning centre. That’s the part that decides how to put words in sequence.

    That’s why LLMs sound like us: they sequence words in a very similar way.
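
    As a purely illustrative sketch of how those systems get tied together (every function here is a toy stand-in, not a real model or API):

    ```python
    def recognise_speech(audio: str) -> str:
        # ASR analogue: sound in, words out (toy: pass a transcript through)
        return audio

    def plan_language(idea: str) -> str:
        # LLM analogue: decides how to put words in sequence (toy reply)
        return f"you said: {idea}"

    def synthesise_speech(text: str) -> str:
        # TTS analogue: words in, sound out (toy marker instead of audio)
        return f"<audio of '{text}'>"

    # Sound -> ideas -> words -> sound, mirroring the brain's pipeline.
    print(synthesise_speech(plan_language(recognise_speech("hello"))))
    ```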

    However, each of these subsystems in our brains can loop back on itself to check the output. I can get my language planner to say “mary sat on the hill”, then loop that through my language recognition centre to see how my conscious brain likes it. My consciousness might notice that “the hill” is wrong and request new words until it gets “a hill”, which it believes is more fitting. It might even notice that “mary” is the wrong name and look for others, cycling through martha, marge, maths, maple, may, yes, that one. Okay, “may sat on a hill”, which then goes to the speech planning centres to eventually come out of my mouth.

    Your brain does this so much you generally don’t notice it happening.
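
    In the same toy spirit, that loop-back is just generate, check, regenerate. A hedged sketch, with both callables as hypothetical stand-ins rather than any real model’s API:

    ```python
    from typing import Callable

    def refine(propose: Callable[[str], str],
               critique: Callable[[str], float],
               prompt: str,
               threshold: float = 0.9,
               max_loops: int = 5) -> str:
        """Let a 'planner' propose words and a 'recogniser' veto them."""
        draft = propose(prompt)
        for _ in range(max_loops):
            if critique(draft) >= threshold:
                break                               # the recogniser likes it
            draft = propose(f"revise: {draft}")     # ask the planner again
        return draft

    # Toy usage, echoing the example above: prefer "a hill" over "the hill".
    print(refine(lambda p: p.replace("the hill", "a hill"),
                 lambda s: 1.0 if "a hill" in s else 0.0,
                 "mary sat on the hill"))
    ```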

    In the 80s there was a craze around so-called “automatic writing”, which was essentially zoning out and just writing whatever popped into your head without editing. You’d get fragments of ideas and really strange things, often very emotionally charged; they seemed like they were coming from some mysterious place, maybe ghosts, demons, past lives, who knows? It was just our internal LLM being given free rein, but people got spooked into believing it was a real person, just like people think LLMs are people today.

    In reality we have no idea how to even start constructing a consciousness. It’s such a complex task and requires so much more linking and understanding than just a probabilistic connection between words. I wouldn’t be surprised if we were more than a century away from AGI.


  • Actually, factually, in truth, for real, fr, no joke, seriously, absolutely, in fact, truly, truthfully, really, in reality, literally, I couldn’t tell you, because only you can figure out how to say what you want to say, and if you didn’t already know that there are countless ways to say that, that makes me wonder if you actually care very much about the topic, for realsies, in actuality. All you need to do is spend a few seconds thinking of another way to say it and you can answer your own question.

    It’s language. Of course we have ways of saying things, that’s what it’s for. Also you can say “literally” to mean “actually” as long as you understand how to say it in context, and the fact that you can correct people who you believe are using it wrong is a sign that you can tell the difference and you don’t need to correct them.

    And if we don’t have a way of saying something, you can invent one, because that’s how language works. People who tell you that there’s some authoritative measure by which we know what words mean don’t actually care about language. They’re trying to kill our language, because a living language can’t be controlled like they want. The good news is that it’s impossible to achieve that goal.

    Like if you’re not going around lamenting the fact that “terrific” doesn’t mean “terrifying” anymore then maybe it’s okay if words change. It sounds like you survived that particular tragedy.


  • “Literally” literally means “as written”, or “in the literature”.

    To use the word “literally” to mean “in reality” or “in fact” is not that original meaning, but is literally - in fact, as well as a written thing - a figurative meaning.

    Language changes. There are plenty of words that are their own antonyms. It’s not sad, it’s inevitable, and the sooner you can accept that the sooner you can avert the fate of becoming an old man yelling at clouds.


  • Little Brother is a novel about a future dystopia where copyright laws have been allowed free rein to destroy people’s lives.

    It’s legislated that only “secure” hardware is allowed, but hardware is by definition fixed, which means that every time a vulnerability is found - which is inevitable - there is a hardware recall. So the black market is full of hardware which is proven to have jailbreaking vulnerabilities.

    Just a glimpse of where all this “trusted”, “secure” computing might lead.

    As a short video I saw many years ago put it: “trust always depends on mutuality, and they already decided not to trust you, so why should you trust them?”

    Edit: holy shit, it’s 15 years old, and “anti rrusted computing video dutch voice over” (turns out the guy is German actually) was enough to find it:

    https://www.lafkon.net/work/trustedcomputing/



  • Well sure, psychos with nukes are scary and there may be an element of the US losing control of their attack dog, but the idea that the Israelis are calling the shots is not really accurate. It cuts dangerously close to the antisemitic conspiracy theory that Israel through AIPAC controls the US government, which also conveniently lets the US off the hook for sponsoring them.

    I just think it’s important to remember that for all their brutality, Israel is not in charge, and they are not the military superpower that has dominated much of global politics for the past century. They are a vassal state, and they would listen if anyone in power had the will to tell them no; they are far too dependent on US support.


  • Everyone should read the words of one of Nixon’s top aides regarding why they started the war on drugs:

    “The Nixon campaign in 1968, and the Nixon White House after that, had two enemies: the antiwar left and black people,” former Nixon domestic policy chief John Ehrlichman told Harper’s writer Dan Baum for the April cover story published Tuesday.

    “You understand what I’m saying? We knew we couldn’t make it illegal to be either against the war or black, but by getting the public to associate the hippies with marijuana and blacks with heroin, and then criminalizing both heavily, we could disrupt those communities,” Ehrlichman said. “We could arrest their leaders, raid their homes, break up their meetings, and vilify them night after night on the evening news. Did we know we were lying about the drugs? Of course we did.”

    Source

    And yet the wikipedia entry for the war on drugs starts like this:

    The war on drugs is a global anti-narcotics campaign led by the United States federal government, including drug prohibition, foreign assistance, military intervention, and counterterrorism, *with the aim of reducing the illegal drug trade in the US*.

    Emphasis added. I would say that is flat out false. It may be the openly stated aim, but it is not the true aim, and they’ve admitted it. Just another reminder that these fascists know exactly what the fuck they’re doing and that wikipedia is subject to propaganda influences.


  • Because issuing a prohibition is basically always a punishment of the people to distract from who is actually causing the problem.

    From a political standpoint it very much is either/or: this is done to exhaust any momentum towards systemic change.

    “Ban the children to protect them” is an extremely shortsighted way to approach any policy or social ill. Kids will find a way to access social media, and this ban means they’ll need to do it in secret. So now anybody preying on them through those means has their implicit cooperation in covering up the abuse. That includes the media platforms themselves.

    Also, why would you need to ban children from social media if the addictive strategies were under control?

    A ban like this is only going to cause harm.


  • More power to the rear makes sense because you get more traction at the rear under normal acceleration, not just when carrying a load. It’s pretty typical of electric cars to do this, just like it’s typical to have bigger brakes on the front of all cars, because there’s more traction at the front under braking.
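
    A rough back-of-envelope for that traction claim, assuming a simple rigid-body model with mass m, wheelbase L, centre-of-gravity height h, and longitudinal acceleration a:

    $$\Delta F_z = \frac{m \, a \, h}{L}$$

    Under acceleration the rear axle gains about that much load and the front loses the same; under braking the transfer reverses. Grip scales roughly with the load on the tyre, so the driven rear wheels can transmit more torque and the front wheels can take more braking force.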

    There’s also the issue of torque vectoring. Without a mechanical differential, torque vectoring is essential, but under acceleration torque vectoring to the rear wheels is much more effective than to the front wheels, so that’s another reason to split the rear power across two motors but not the front.