• ghewl@lemmy.world · arrow-up 6 · 7 months ago

    In the 1990s, I transitioned from Windows to Linux as my primary operating system. Since then, Linux has consistently exhibited advancements in the desktop and software space, whereas Windows and Mac operating systems appear to have experienced a decline in terms of user experience and functionality.

    • Xatix@lemmy.world · arrow-up 2 · 7 months ago

      As someone who regularly uses Arch, Ubuntu, macOS and Windows, I agree.

      The advances Linux has made, especially in the last few years, are just amazing. I can run the majority of my games through Proton, and there are even some preconfigured packages with Illustrator and Photoshop CC that Adobe doesn't seem to care about at all.

  • silent_robo@lemmy.ml · arrow-up 5 · 7 months ago

    This will make Windows 11 a target for hackers and government agencies, since it will be a treasure trove of data. Windows is already bad at security. Let's see how this backfires on Microsoft.

    • Tronn4@lemmy.world · arrow-up 2 · 7 months ago

      Microsoft will be the “hackers”. On days when outside hackers aren’t breaking in, MS will be data mining and selling the data themselves.

      • wax@feddit.nu · arrow-up 2 · 7 months ago

        Holy shit, that’s annoying. Say I installed Win11 for my elderly parents. They’d get this sign-up screen after I thought everything was set up and ready to use.

        Glad I installed elementary OS for them a few years ago; it’s been completely painless (they’re used to Apple UX).

        • privsecfoss@feddit.dk · arrow-up 2 · 7 months ago

          Nice. I upgraded a ThinkPad, installed Linux Mint and gave it to my dad. I hadn’t heard anything from him about it for a couple of months; your post reminded me of it.

          So I wrote him just now and asked how it was going, and he replied that he loves it and uses it every day.

          And that he hasn’t had any problems he couldn’t solve on his own. He’s 70 and was a Windows-only heavy user - until now 🙂

          As you said: completely painless.

  • youmaynotknow@lemmy.ml · arrow-up 3 · 7 months ago

    “But they’ll be reserved for premium models starting at $999.”

    Translation: “We want to start with the data of people that can spend, then we’ll move to the rest”.

    The last Windows computer in my house was my wife’s, and she’s been extremely happy on Fedora Gnome for the last couple of months, asking me why I didn’t tell her about it before (I did, lol).

    • olutukko@lemmy.world · arrow-up 2 · 7 months ago

      My girlfriend likes Fedora GNOME too. I do all the technical stuff anyway, so she doesn’t really have to know that much about the OS she uses.

      • youmaynotknow@lemmy.ml · arrow-up 2 · 7 months ago

        Same here. The only tweak I had to do was set up Flameshot; my wife finds GNOME’s screenshot app lacking, and so do I.

        The only thing we run differently is our office suite. I set her up with OnlyOffice because of its similarity to MS Office, but I prefer LibreOffice.

  • flango@lemmy.eco.br · arrow-up 3 · 7 months ago

    Google rolled out a retooled search engine that periodically puts AI-generated summaries over website links at the top of the results page; while also showing off a still-in-development AI assistant Astra that will be able to “see” and converse about things shown through a smartphone’s camera lens

    What worries me the most is that this AI hype is coming on strong in the smartphone market too, and there we don’t have something solid like Linux distributions to switch to and be free.

    • Facebones@reddthat.com · arrow-up 2 · 7 months ago

      I think demand will soon push manufacturers to open their bootloaders, or new manufacturers will crop up to fill that gap.

      I’m running GrapheneOS on a Pixel 8 Pro and haven’t looked back.

  • DashboTreeFrog@discuss.online · arrow-up 3 · 7 months ago

    I hate this but I also get it.

    A little while ago on the TWIT podcast, one of the guests, or maybe Leo himself, was talking about how this is exactly what they want out of AI: for it to know how they use their computer and just streamline everything. Some people are really excited about the possibilities, and yeah, the AI needs to track whatever you’re doing to know how to help you with your workflow.

    That said, I don’t want Microsoft keeping track of everything I’m doing. They’ve already shown that they’re willing to sell our data and shove ads down our throats, so as much as they say we can filter out what we don’t want tracked, I’m not inclined to trust or believe them.

    • illi@lemm.ee · arrow-up 3 · 7 months ago

      I’m honestly kinda excited about the possibilities in the greater scheme of things, but the fact that Microsoft will pretty much record whatever people are doing on their systems is just nuts and slightly terrifying. This is something that should ideally be done locally, without big corporations looking in - but that’s certainly not what they are doing.

        • j4k3@lemmy.world · arrow-up 4 · 7 months ago

          I’ve spent a lot of time with offline open-source AI running on my computer. About the only thing it can’t infer from interactions is your body language. This is the most invasive way anyone could ever know another person. As a person’s profile is built up across the context dialogue, the model can form statistical relationships that would make no sense to a human but hold with far better than 50% probability. This information is the key to making people easily manipulated inside an information bubble. Sharing that kind of information is as stupid as streaking the Super Bowl. There will be consequences, and they won’t be pretty. This isn’t data collection; it is the key to how a person thinks, on a level beyond their own self-awareness.

          • j4k3@lemmy.world · arrow-up 0 · 7 months ago

            Whatever is the latest from Hugging Face. Right now, a combo of Mixtral 8×7B, Llama 3 8B, and sometimes an old Llama 2 70B.

            • barsquid@lemmy.world · arrow-up 0 · 7 months ago

              Do you have a setup that collects your interactions to feed into those? The way you described it I imagined you are automatically collecting data for it to infer from and getting good results. Like a powered-up bash history or something.

              • j4k3@lemmy.world · arrow-up 1 · 7 months ago

                No idea why I felt chatty, and I’m kinda embarrassed by the blah blah blah at this point, but whatever. Here is everything you need to know in a practical sense.

                You need a more complex RAG setup for what you asked about. I have not gotten as far as needing this.

                Models can be tricky to learn at my present level. Communicating with them is different from communicating with humans. In almost every case where people complain about hallucinations, they are wrong; models do not hallucinate very much at all. They will give you wrong answers, but there is almost always a reason. You must learn how alignment works and the problems it creates. Then you need to understand how realms and persistent entities work. Once you understand what all of these mean and their scope, all the little repetitive patterns start to make sense, and you start to learn who is really replying and their scope. The model’s reply for Name-2 always has a limited ability to access the immense amount of data inside the LLM. You have to build momentum in the space you wish to access, and you often need to know the specific wording the model needs to hear in order to access the information.

                With retrieval-augmented generation (RAG), the model can look up valid info from your database and share it directly. With this method you’re just using the most basic surface features of the model against your database. Some options for this are LocalGPT and Ollama, or LangChain with Chroma DB if you want something basic in Python. I haven’t used these. How you break down the information available to the RAG is important for this application, and my interests have a bit too much depth and scope for me to feel confident trying this.
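                A toy sketch of the retrieval step described above, using word-count cosine similarity as a stand-in for the learned embeddings a real RAG stack would use (the snippets and function names here are made up for illustration):

```python
# Minimal sketch of the RAG idea: retrieve the most relevant snippet
# from a local "database" and prepend it to the prompt before it goes
# to the model. Real setups (LocalGPT, Ollama, LangChain + Chroma)
# use learned embeddings, but the retrieval step has the same shape.
import math
from collections import Counter

def similarity(a: str, b: str) -> float:
    """Cosine similarity over simple word counts (a stand-in for embeddings)."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = math.sqrt(sum(v * v for v in ca.values())) * \
           math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

def build_prompt(question: str, snippets: list[str]) -> str:
    """Pick the best-matching snippet and wrap it around the question."""
    best = max(snippets, key=lambda s: similarity(question, s))
    return f"Context: {best}\n\nQuestion: {question}"

snippets = [
    "llama.cpp splits inference between CPU and GPU.",
    "Proton lets many Windows games run on Linux.",
]
print(build_prompt("How do games run on Linux?", snippets))
```

                How you chunk the database matters exactly as the comment says: the snippet granularity decides what the model can ever see in its context window.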

                I have chosen to learn the model itself at a deeper intuitive level so that I can access what it really knows within the training corpus. I am physically disabled from a car crashing into me on a bicycle ride to work, so I have unlimited time. Most people will never explore a model like I can. On the technical side, I use a model much like Stack Exchange: I can ask it for code snippets, bash commands, the kind of searching I might have done on the internet, grammar, spelling, surface-level Wikipedia-like replies, and roleplay. I’ve been playing around with writing science fiction too.

                I view text-generation models like the early days of the microprocessor; we’re at the Apple 1 kit phase. The LLM has a lot of potential, but the peripheral hardware and software that turned the chip into a useful computer are like the extra code used to tokenize and process the text prompt. All models are static, deterministic, and the craziest regex-plus-math problem ever conceived. The real key is the standard code used to tokenize the prompt.

                The model has a maximum context token size, and this is all the input/output it can handle at once. Even with a RAG, this scope is limited. My 8×7B has a 32k context token size, but the Llama 3 8B is only 8k. Generally speaking, you can cut this number in half and that will be close to your maximum word count. All models work like this. Something like GPT-4 runs on enterprise-class hardware and has a total context of around 200k. There are other tricks that can be used in a more complex RAG, like summarization to distill down critical information, but you’ll likely find it challenging to do this level of complexity on a single 16-24 GB consumer-grade GPU.

                Running a model like GPT-4 requires somewhere around 200-400 GB of GPU memory; it is generally double the “B” size of each model. I can only run the big models like an 8×7B or 70B because I use llama.cpp and can divide the processing between my CPU and GPU (12th-gen i7 and a 16 GB GPU), and I have 64 GB of system memory to load the model initially. Even with this enthusiast-class hardware, I’m only able to run these models in quantized form that others have uploaded to Hugging Face; I can’t train them. The new Llama 3 8B is small enough for me to train, and this is why I’m playing with it. Plus it is quite powerful for such a small model. Training is important if you want to dial in the scope to some specific niche. The model may already have the info, but training can make it more accessible. Smaller models have a lot of annoying “habits” that are not present in the larger models. Even with quantization, the larger models are not super fast at generation, especially if you need the entire text instead of the streaming output, but they are more than fast enough to generate a stream faster than your reading pace.

                If you’re interested in complex processing where you’re going to be calling a few models to do various tasks, like with a RAG, things start getting impractically slow for a conversational pace on even the best enthusiast consumer-grade hardware. Now, if you can scratch together the cash for a multi-GPU setup and can find the supporting hardware, there is technically a $400 16 GB AMD GPU. That could get you to ~96 GB for ~$3k, or double that if you want to be really serious. Then you could get into training the heavy hitters and running them super fast.
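                The “double the B size” rule of thumb above can be sketched as a quick back-of-the-envelope estimate (the function name and numbers are illustrative, and real usage adds KV-cache and activation overhead on top of the weights):

```python
# Rough VRAM estimate from the rule of thumb: fp16 weights take
# ~2 bytes per parameter, so a model needs roughly 2x its "B" size
# in GB just for the weights. Quantization shrinks bytes-per-weight,
# which is why a quantized 8B model fits on a consumer GPU.

def vram_estimate_gb(params_billions: float, bits_per_weight: int = 16) -> float:
    """Approximate GB needed just to hold the weights."""
    bytes_per_weight = bits_per_weight / 8
    return params_billions * bytes_per_weight

print(vram_estimate_gb(70))     # fp16 70B: ~140 GB
print(vram_estimate_gb(70, 4))  # 4-bit quantized 70B: ~35 GB
print(vram_estimate_gb(8, 4))   # 4-bit quantized 8B: ~4 GB
```

                This is why llama.cpp’s CPU/GPU split matters: the quantized big models still exceed a single 16-24 GB card, so part of the model has to live in system memory.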

                All the useful functional stuff happens in the model-loader code. Honestly, the real issue right now is that CPUs have too narrow a bus between the L2 and L3 caches, along with too small an L1; the tensor-table math bottlenecks hard in this area. Inside a GPU there is no memory management unit showing only a small window of available memory to the processor; all the GPU memory is directly attached to the processing hardware for parallel operations. The CPU cache bus width is the underlying problem that must be addressed. This can be remedied somewhat by building the model for the specific computing hardware, but training a full model takes something like a month on 8×A100 GPUs in a datacenter. Bleeding-edge hardware moves very slowly, as it is among the most expensive commercial endeavors in human history. Generative AI has only been in the public sphere for a year now. The real solutions are likely at least 2 years away, and a true standard solution is likely 4-5 years out. The GPU is just a hacky patch of a temporary solution.

                That is the real scope of the situation and what you’ll run into if you fall down this rabbit hole like I have.

                • barsquid@lemmy.world · arrow-up 0 · 7 months ago

                  This is pretty cool! Am I reading correctly that it isn’t so much about collecting a corpus of data for it to browse through as much as it is understanding how to do a specific query, maybe giving it a little context alongside that? It sounds like it might be worth refining a smaller model with some annotated information, but not really feasible to collect a huge corpus and have the model be able to pull from it?

      • iAvicenna@lemmy.world · arrow-up 2 · 7 months ago

        I mean, this data will most likely be more useful for surveillance/ads than for AI. Nowadays, with AI, they can make it look like they are only a couple of steps away from a very intelligent personal assistant, and therefore make it seem more plausible that they need your data to make that leap. But in reality I feel the AI is not at a level that could leverage personalization, at least not in the context of personal assistance. In the context of behavioural mapping, of course, it is a super lucrative deal for them. There is already a ton of very useful AI stuff they could add that doesn’t require personal behaviour info (at least not at this generality), yet they don’t seem to put as much effort into that; instead it’s “we need all your info stored somewhere for this very super (and mandatory) AI search assistant”. Big red flag.

      • DashboTreeFrog@discuss.online · arrow-up 1 · 7 months ago

        Yeah, maybe some kind of situation where you turn it on for “training time” with access to only specified files and systems on the computer, no internet access, etc. At the same time though, I wonder how much an AI could really streamline things. Would it just pre-load my frequent files and programs? Make suggestions or reminders on tasks? I don’t think we’re anywhere near the level where it could actually be doing work for me yet.

        Interesting possibilities, but I’m not sure how useful yet.

  • UntitledQuitting@reddthat.com · arrow-up 2 · 7 months ago

    Sometimes I like sitting in my Unix-based ivory tower, but then I remember my daily driver uses macOS and that it’s only a matter of time before they employ something similar/worse.

    When the inevitable inevitably evits, the toughest choice for me will be Fedora vs. Tumbleweed.

  • archchan@lemmy.ml · arrow-up 2 · 7 months ago

    It’s not going to get better. I nuked Windows 10 and switched to Linux permanently around the Windows 11 launch. My only regret is not switching sooner, back around Windows 8.