• dan@upvote.au · +26 · edited · 4 hours ago

      It’s amusing. Meta’s AI team is more open than "Open"AI ever was - they publish so many research papers for free, and the latest versions of Llama are very capable models that you can run on your own hardware (if it’s powerful enough) for free as long as you don’t use it in an app with more than 700 million monthly users.
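
      For anyone curious what “run it on your own hardware” looks like in practice, here is a minimal sketch, assuming the llama-cpp-python bindings and a quantized Llama file you have already downloaded (the filename below is purely illustrative; any local runner works just as well):

      ```python
      # Minimal local-inference sketch, assuming llama-cpp-python is installed
      # (pip install llama-cpp-python) and a quantized Llama GGUF file is on disk.
      # The model filename is illustrative, not a specific recommended build.
      from llama_cpp import Llama

      llm = Llama(model_path="./llama-3.1-8b-instruct.Q4_K_M.gguf", n_ctx=4096)

      reply = llm.create_chat_completion(
          messages=[{"role": "user", "content": "Summarize Llama's 700M monthly-user licence clause in one sentence."}],
          max_tokens=128,
      )
      print(reply["choices"][0]["message"]["content"])
      ```

      Everything runs offline once the weights are on disk; the practical limit is just how much RAM/VRAM your machine has for the model size you pick.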

  • Chaotic Entropy@feddit.uk · +42 · 5 hours ago

    The restructuring could turn the already for-profit company into a more traditional startup and give CEO Sam Altman even more control — including likely equity worth billions of dollars.

    I can see why he would want that, yes. We’re supposed to ooh and aah at a technical visionary who is always, ultimately, a money-guy executive who wants more money and more executive power.

    • toynbee@lemmy.world · +5 · 5 hours ago

      I saw an interesting video about this. It’s outdated (from ten months ago, apparently) but added some context that I, at least, was missing - and that also largely aligns with what you said. Also, though it’s not super evident in this video, I think the presenter is fairly funny.

      https://youtu.be/L6mmzBDfRS4

  • celsiustimeline@lemmy.dbzer0.com · +49 · edited · 6 hours ago

    Whoops. We made the most expensive product ever designed, paid for entirely by venture capital seed funding. Wanna pay for each ChatGPT query now that you’ve been using it for 1.5 years for free with barely-usable results? What a clown. Aside from the obvious abuse that will occur with image, video, and audio generating models, these other glorified chatbots are complete AIDS.

    • assassin_aragorn@lemmy.world · +39 / -2 · 5 hours ago

      paid for entirely by venture capital seed funding.

      And by stealing from other people’s work. Don’t forget that part.

        • raspberriesareyummy@lemmy.world · +9 / -2 · 3 hours ago

          When individual copyright violations are considered “theft” by the law (and the RIAA and the MPAA), violating the copyrights of billions of private people to generate profit is absolutely stealing. The former, arguably, is often a measure of self-defense against extortion by copyright-holding for-profit enterprises.

      • Madis@lemm.ee · +3 / -1 · edited · 5 hours ago

        Serious question though, has any other company matched their 4o model yet? Maybe Claude?

        • CaptSneeze@lemmy.world · +4 · edited · 4 hours ago

          I’ve been using Claude pretty heavily for the last couple of months and have been very satisfied, more satisfied than I was with ChatGPT, mostly for helping me cobble together various PowerShell scripts or troubleshoot complicated and complex Excel formulas. The latter I am often doing as part of my job, and have been for a decade, so when I run into trouble it’s usually deep in the weeds, and Claude has saved me several hours of manual investigation by pointing me quickly to the problem areas to examine. The only thing I wish it had is image generation, but that would mostly just be for making joke images to send to friends and coworkers.

          Edit to add: While I do prefer the info I receive from Claude over ChatGPT for my use, I think it’s actually the interface that I find much more useful. I forget what they call the programming interface that you turn on in settings somewhere, but I really like how it breaks out all the code on the right side, separate from the conversation.

  • JustARaccoon@lemmy.world · +88 · 9 hours ago

    I’m confused, how can a company that’s gained numerous advantages from being non-profit just switch to a for-profit model? Weren’t a lot of the advantages (like access to data and scraping) given with the stipulation that it’s for a non-profit? This sounds like it should be illegal to my brain

    • ipkpjersi@lemmy.ml · +24 · 5 hours ago

      I’m confused, how can a company that’s gained numerous advantages from being non-profit just switch to a for-profit model

      Money

    • FatCrab@lemmy.one · +5 · 4 hours ago

      Their non-profit status had nothing to do with the legality of their training data acquisition methods. Some of it was still legal and some of it was still illegal (torrenting a bunch of books off a piracy site).

    • berno@lemmy.world · +32 · 7 hours ago

      Careful, you’re making too much sense here and overlapping with Elmo’s view on the subject.

      • TachyonTele@lemm.ee · +2 / -1 · 4 hours ago

        Money doesn’t have any advantages in other countries? When did that happen?

            • affiliate@lemmy.world · +1 · 7 minutes ago

              the person that you’re replying to said something that’s true about the USA. they didn’t say anything about other countries.

              for another example, i can say “if you’re in the USA, then the current year is 2024” and that statement will be true. it is also true in every other country (for the moment), but that’s beside the point.

  • sudo42@lemmy.world · +20 / -1 · 7 hours ago

    Sam Altman is demonstrating the power of AI. He’s showing how a single CEO can fire the entire company and continue to develop the product to be even better than when humans were involved.

    “OpenAI. No real humans involved!” ™

    • PugJesus@lemmy.world · +19 · 7 hours ago

      The actual employees threatened to resign en masse, because they own equity in the company and want this dogshit move too.

  • pjwestin@lemmy.world · +56 · 9 hours ago

    I really don’t understand why they’re simultaneously arguing that they need access to copyrighted works in order to train their AI while also dropping their non-profit status. If they were at least ostensibly a non-profit, they could pretend that their work was for the betterment of humanity or whatever, but now they’re basically saying, “exempt us from this law so we can maximize our earnings.” …and, honestly, our corrupt legislators wouldn’t have a problem with that were it not for the fact that bigger corporations with more lobbying power will fight against it.

    • Dkarma@lemmy.world · +1 / -2 · 3 hours ago

      There is no law that covers training.
      You guys are the ones demanding a law that doesn’t exist.

  • FlashMobOfOne@lemmy.world · +55 / -1 · 10 hours ago

    Sounds like another WeWork or Theranos in the making, except we already know the product doesn’t do what it promises.

    • lando55@lemmy.world · +9 / -1 · 9 hours ago

      What does it actually promise? AI (namely generative models and LLMs) is definitely overhyped in my opinion, but admittedly I’m far from an expert. Is what they’re promising to deliver not actually doable?

      • frezik@midwest.social · +3 / -1 · edited · 4 hours ago

        They want AGI, which would match or exceed human intelligence. Current methods seem to be hitting a wall. It takes exponentially more inputs and more power to see the same level of improvement seen in past years. They’ve already eaten all the content they can, and they’re starting to talk about using entire nuclear reactors just to power it all. Even the more modest promises, like pictures of people with the correct number of fingers, seem out of reach.

        Investors are starting to notice that these promises aren’t going to happen. Nvidia’s stock price is probably going to be the bellwether.

      • naught101@lemmy.world · +27 · 9 hours ago

        It literally promises to generate content, but I think the implied promise is that it will replace parts of your workforce wholesale, with no drop in quality.

        It’s that last bit that’s going to be where the drama happens

      • Smokeydope@lemmy.world · +12 / -15 · edited · 7 hours ago

        It delivers on what it promises to do for many people who use LLMs. They can be used for coding assistance, setting up automated customer support, tutoring, processing documents, structuring lots of complex information, providing generally accurate knowledge on many topics, acting as an editor for your writing, and lots more.

        It’s a rapidly advancing pioneer technology, like computers were in the 90s, so every six months to a year brings a new breakthrough in overall intelligence or a new ability. The newest LLM models can process images and audio as well as text.

        The problem for OpenAI is that they have serious competitors who will absolutely show up to eat their lunch if they sink as a company: Facebook/Meta with their Llama models, Mistral AI with all of their models, Alibaba with Qwen, and some good smaller competition too, like the OpenHermes team. All of these big tech companies have open-sourced some models so you can tinker with and finetune them at home, while OpenAI remains closed source, which is ironic given the company name… Most of these AI companies offer cloud access to their models at very competitive pricing, especially Mistral.

        The people who say AI is a trendy, useless fad don’t know what they are talking about or are upset at AI. I am part of the local LLM community and have been playing around with open models for months, pushing my computer’s hardware to its limits. It’s very cool seeing just how smart they really are, and what a computer that simulates human thought processes and knows a little bit of everything can actually do to help me in daily life.

        Terence Tao, superstar genius mathematician, describes the newest high-end model from OpenAI as improving from an “incompetent graduate” to a “mediocre graduate”, which essentially means AI is now generally smarter than the average person in many regards.

        This month several competitor LLM models were released which, while much smaller than OpenAI’s o1, somehow beat or equaled that big OpenAI model on many benchmarks.

        Neural networks are here and they are only going to get better. We’re in for a wild ride.

        • exanime@lemmy.world · +5 / -2 · 5 hours ago

          It delivers on what it promises to do for many people who use LLMs.

          Does it though?

          They can be used for coding assistance,

          They promised no programmers would be needed in 5 years (well, not promised; somebody did say that, but I think not OpenAI staff). The cost of AI, in both money and energy use, does not really justify the limited aid it can provide to a programmer. You are never getting enough additional efficiency from said programmer to justify those costs.

          setting up automated customer support,

          Even more hated than when every customer centre moved to India

          tutoring, processing documents, structuring lots of complex information,

          Again, at that cost? The marginal improvement does not add up.

          providing generally accurate knowledge on many topics,

          Is it though? If I can only trust it with answers I already know enough about to discern whether I am getting bullshit or not, then it’s not worth it. As it is today, I cannot trust it with any search I really do not know the answer to (or cannot easily verify), as it could be throwing complete bullshit at me and I would have no way of knowing.

          acting as an editor for your writing, and lots more.

          Again? You mentioned processing docs already… but again I tell you: who will pay the heavy costs just so internal memos are written slightly better? And everything your company sends out would have to be reviewed, since you do not want AI hallucinating a promise you cannot deliver.

            • Bongles@lemm.ee · +2 · 1 hour ago

            You keep mentioning cost, and in the grand “there’s no such thing as a free lunch” scheme there is a large cost, but users are just paying for a license from Microsoft to have Copilot in their Visual Studio software or in M365 apps, etc.

            So for helping with development, it’s really not that expensive for the users. Also, “they” make lots of ridiculous claims, and I don’t know who said it, but “no developers in 5 years” is a wild claim that no one should’ve thought was real.

        • Stegget@lemmy.world · +18 / -2 · 8 hours ago

          My issue is that I have no reason to think AI will be used to improve my life. All I see is a tool that will rip, rend and tear through the tenuous social fabric we’re trying to collectively hold on to.

          • Smokeydope@lemmy.world · +6 / -5 · edited · 7 hours ago

            A tool is a tool. It has no say in how it’s used. AI is no different from the computer software you use to browse the internet or do other digital tasks.

            When it’s used badly, as an outlet for escapism or a substitute for social connection, it can lead to bad consequences in your personal life.

            It’s best used as a tool to help reason through a tough task, or as a step in a creative process. As on-demand assistance to aid the disabled. As a non-judgemental conversational partner that the neurodivergent and emotionally traumatized can open up to. Or to help a super genius rubber-duck their novel ideas and work through complex thought processes. It can improve people’s lives if applied to the right use cases.

            It’s about how you choose to interact with it in your personal life, and how society, businesses and your governing bodies choose to use it in their own processes. And believe me, they will find ways to use it.

            I think comparing LLMs to computers in the 90s is accurate. Right now only nerds, professionals, and industry/business/military see their potential. As the tech gets figured out, utility improves, and LLM desktops start being sold as consumer-grade appliances, maybe the attitude will change?

            • exanime@lemmy.world · +5 · edited · 4 hours ago

              A tool is a tool.

              That is a myopic view. Sure, a tool is a tool: if I take a gun and use it to save someone from getting mugged = good; if I use it to mug someone = bad.

              But regardless of the circumstance of use, we can all agree that a gun’s only utility is to destroy a living organism.

              You know, I know, everyone here knows, AI will only be used to generate as much profit as possible in the shortest amount of time, regardless of the harm it causes. And right now, the big promise of AI is that it will replace costly human employees, that’s it, that’s all.

              Fortunately, it is really bad at this and unlikely to achieve that goal.

            • AA5B@lemmy.world · +5 · 7 hours ago

              A better analogy is search engines. It’s just another tool, but

              • at their best, they enable you to find anything from all the world’s knowledge
              • at their worst, they are just another way to serve ads and scams, with random companies vying for attention and treating any attention as good attention, regardless of what you’re looking for

              When I started as a software engineer, my detailed knowledge was most important and my best tools were the manuals. Now my most important tools are search engines and autocomplete: I can work faster with less knowledge of the syntax, and my value is the higher-level thinking about what we need to do. If my company ever allows AI, I fully expect it to be as important a tool as a search engine.

      • vane@lemmy.world · +3 · edited · 4 hours ago

        But their operating cost is $5 billion per year. They plan to raise $6.5 billion from Microsoft, Apple and Nvidia this year, and they have not raised it yet. If their model fails next year and the sales don’t materialize, will the shareholders of those big three pay another $6.5 billion in 2026? A couple of companies have raised that kind of money early on, for example Docker Inc. Where is Docker now in the enterprise? They needed to change their licensing model just to survive, and their operating cost is just the storage of Docker containers. I doubt OpenAI will survive this decade. Sam Altman is just preparing for a Microsoft takeover before the ship sinks.

  • Aceticon@lemmy.world · +51 / -1 · 11 hours ago

    What! A! Surprise!

    I’m shocked, I tell you, totally and utterly shocked by this turn of events!

  • werefreeatlast@lemmy.world · +30 / -1 · 11 hours ago

    Oh shit! Here we go. At least we didn’t hand them 20 years of personal emails or direct interfamily communications.