• 0 Posts
  • 10 Comments
Joined 1 year ago
Cake day: June 16th, 2023




  • Correct. Micro-ATX is the smaller version of ATX, which is itself smaller than EATX (Extended ATX). Your old case probably fits micro-ATX if it’s not OEM. You can populate it with a motherboard, CPU, RAM, SSD, and power supply (you don’t need more than 500 W for your use case) and eventually move to a nicer case like that Node if/when you fall in love with the hobby. My RPis have been collecting dust since I switched to a low-power server.

    It’s a whole different experience when general advice applies to your hardware, versus the RPi ecosystem. Many more options. In 2024, ATX offers no real benefit over the smaller form factor beyond better heat management for high-power builds with spaced-out components.

    And a correction: the Node 304 supports 6 hard drives, the 804 supports 8.



  • My uncle bought a used car built in communist East Germany. He always emphasized how it was built like a tank to last. Capitalism is great and all, but it promotes waste. Companies have an incentive to make products that fail and need to be repurchased. Planned obsolescence would be fine if it were only about people craving something better. As it stands, it’s more of a forced switch driven by breakable parts.







  • Most people here don’t understand what this is saying.

    We’ve had “pure” human-generated data, verifiably so, since LLMs and ImageGen didn’t exist. Any bot-generated data was easily filterable due to its lack of sophistication.

    ChatGPT and SD3 enter the chat and generate data that’s nearly indistinguishable from human output, but with a few errors here and there. These errors, while few, are spectacular and have no basis in the training data.

    2 years later, the internet is saturated with generated content. The old datasets are like gold now, since none of the new data is verifiably human.

    This matters once you’ve played with local machine learning and understand how these machines “think”. If you feed an AI-generated set to an AI as training data, it learns the mistakes as well as the data. Every generation, it’s like mutations accumulate until eventually it just produces garbage.

    Training models on generated sets slowly but surely fails without a human touch. Now scale this concept up to the whole net. When 50% of your dataset is machine generated, the new model trained on it begins to deteriorate, and its output feeds back into the pool. Do this long enough and that 50% becomes 60, 70, and beyond (see the toy sketch at the end of this comment).

    Human creativity and thought have yet to be replicated. These models have no human ability to be discerning, or to sleep and recover from errors. They simply learn imperfectly and generate new, less perfect data in a digestible form.
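
    If you want to see that feedback loop in miniature, here’s a toy Python sketch (purely illustrative, with made-up numbers; the “model” is just a Gaussian fitted to its training data): each generation trains only on samples from the previous generation, so fitting errors compound instead of averaging out.

    import numpy as np

    rng = np.random.default_rng(0)

    # Generation 0: "human" data drawn from the real distribution.
    data = rng.normal(0.0, 1.0, size=500)

    for gen in range(10):
        # "Train" a model: estimate the distribution from the current dataset.
        mean, std = data.mean(), data.std()
        print(f"gen {gen}: mean={mean:+.3f} std={std:.3f}")

        # Publish synthetic data and use it as the next generation's training set.
        # Sampling and fitting errors compound each round; the tails of the
        # original distribution are gradually lost and the estimates drift.
        data = rng.normal(mean, std, size=500)

    Swap in a real model and real data and that same drift shows up as the degradation described above.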