• 0 Posts
  • 57 Comments
Cake day: June 13th, 2023



  • I remember in the late 90s the Green Party in my district was on a roll, culminating in the election of a member to the California State Assembly (one of the highest posts ever held by the Greens in the US). Then came Nader’s presidential bid and its perceived role in the election of Bush, which permanently crippled the legitimacy of the local party. They’re still doing great work with voter guides, legislative analysis, etc.; but they’ll never escape the shadow of Nader and Stein.

    I think the only viable path for a third party now is to start a new one from scratch, and disavow presidential bids from the outset.




  • Regardless of how the image was generated, why is Google treating a random blogspam site as the authoritative version of a work of art over (say) Wikipedia?

    According to the article:

    As 404 Media has reported in January, Google is regularly surfacing AI-generated websites that game search engine optimization before the human-made websites they are trained on. “Our focus when ranking content is on the quality of the content, rather than how it was produced,” Google told 404 Media in a statement at the time.

    Does that mean I can search for any famous image, take the largest existing version, upscale it by 1%, post it on my own site, and instantly be featured at the top of Google searches?










  • The underlying fallacy, IMO, is that people think the purpose of elections is to send a message to the government, instead of choosing the government (and that all political problems can be solved by sending the right message).

    The best way to approach an election is to determine the most likely scenario in which your vote would actually decide the outcome (which in practice means a choice between the two frontrunners in a FPTP system), and then consider what difference that would make in terms of actual policy (rather than symbolism).

    And recognize that this alone won’t fix all the problems with government—that will require other types of involvement beyond voting.




  • AbouBenAdhem@lemmy.world to AI@lemmy.ml: Do I understand LLMs?

    There’s a part of our brain called the salience network, which continually models and predicts our environment and directs our conscious attention to things it can’t predict. When we talk to each other, most of the formal content is predictable, and the salience network filters it out; the unpredictable part that’s left is the actual meaningful part.

    LLMs basically recreate the salience network. They continually model and predict the content of the text stream the same way we do—except instead of modeling someone else’s words so they can extract the unpredictable/meaningful part, they model their own words so they can keep predicting the next ones.

    This raises an obvious issue: when our salience networks process the stream of words coming out of such an LLM, it’s all predictable, so our brains tell us there’s no actual message. When AI developers ran into this, they added a feature called “temperature” that basically injects randomness into the generated text—enough to make it unpredictable, but not obvious nonsense—so our salience networks will get fooled into thinking there’s meaningful content.
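    To make the “temperature” mechanism concrete, here’s a minimal sketch of temperature-scaled sampling as it’s commonly implemented: the model’s raw scores (logits) are divided by the temperature before being converted to probabilities, so higher temperatures flatten the distribution and inject more randomness. The function name and example logits are illustrative, not from any particular model:

    ```python
    import numpy as np

    def sample_next_token(logits, temperature=1.0, rng=None):
        """Sample a token id from raw model logits, scaled by temperature."""
        rng = rng or np.random.default_rng()
        # Dividing by a high temperature flattens the distribution
        # (more surprising choices); a low temperature sharpens it
        # toward the single most likely token.
        scaled = np.asarray(logits, dtype=float) / temperature
        probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)
    ```

    At a temperature near zero this reduces to always picking the most probable token (the fully “predictable” output described above); raising it lets less likely tokens through, which is the unpredictability the comment argues our salience networks read as meaning.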