![](https://lemmy.ca/pictrs/image/d451c051-3cc2-4b9b-ae35-5417d1aceb17.jpeg)
![](https://fry.gs/pictrs/image/c6832070-8625-4688-b9e5-5d519541e092.png)
I think it was a general “when you leave Canada” policy.
I guess Chromium isn’t fully BSD-licensed. That could be the reason. Although I’d think reimplementing the non-BSD bits of Chromium would be less work than reimplementing all the bits, including the BSD ones.
Why are open source software monocultures bad? The vast majority of non-Windows OSes are Linux-based. Teams who don’t like certain decisions of the mainline Linux team maintain their own forks with the needed changes.
Manifest V3 is a great example of this. You can only backport for so long, especially when upstream is adversarial to your changes. We need an unaffiliated engine that corrects the mistakes we made with KHTML/WebKit.
And we could get a functional one today by forking Chromium and never accepting a single upstream patch thereafter. I find it really hard to believe that starting a browser engine from scratch would require less labor. This is why I’m looking for an ulterior motive. Someone mentioned licensing.
Perhaps some folks just want to do the extra work of writing a new browser engine. After all, Linus did just that instead of forking the BSD kernel.
Any intuition on why we’d expect that opening the same page on a newly implemented browser engine, one that implements all the equivalent standards and functions, would consume fewer resources?
I do not understand the urge to start from scratch instead of forking an existing, mature codebase. That’s typically a rookie instinct, but these folks aren’t rookies, so perhaps there’s an ulterior motive of some sort.
You can get the binary from the project’s website. Still not suggesting you f around with it.
If you have root, you could theoretically add Memtest86+ to the boot order. There are tools that allow adding boot entries to the EFI firmware. You could probably place a Memtest86+ binary on your EFI partition and register it with the firmware. But I’m not suggesting you do it, since you could make the machine unbootable and the problem might be on the storage path. I’m just noting what should be possible.
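For illustration only, a minimal sketch of that using efibootmgr. The disk, partition, mount point, and binary name below are placeholders, not instructions for your machine; check yours with lsblk and findmnt first, and again, do this at your own risk:

```sh
# Assumes the EFI system partition is /dev/sda1, mounted at /boot/efi,
# and that you downloaded the EFI binary (e.g. memtest64.efi) from the
# Memtest86+ site. All of these are placeholder values.
sudo mkdir -p /boot/efi/EFI/memtest86plus
sudo cp memtest64.efi /boot/efi/EFI/memtest86plus/

# Register a firmware boot entry pointing at the binary
sudo efibootmgr --create \
  --disk /dev/sda --part 1 \
  --label "Memtest86+" \
  --loader '\EFI\memtest86plus\memtest64.efi'
```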
Most machines I owned that had kernel panics had either an NVIDIA or an AMD graphics adapter, along with bad memory.
FTFY
Even as far back as 2010, the corpo I worked for had an official travel protocol that dictated backing up BlackBerrys, factory resetting them, crossing the border, then restoring them from the cloud. That was for crossing any border.
As many have pointed out, it’s not competitive price-wise. But more than that, the main feature of the Pi is its software support. I buy a Pi not because it’s got the top specs but because I know I can load a rock-solid OS with security support and I won’t have to think about it. This is a problem for every Pi competitor.
Perhaps to people who are used to watching ad-infested cable and don’t pay for ad-free streaming. So it’s not that ads aren’t detracting from the experience but that some folks are used to it. Getting those folks is growth. Number go up.
It isn’t? You might be looking at a different market.
If you were actually able to set it up via ssh,
I never said that.
I’m on Ubiquiti’s payroll, definitely. I’m expecting a check in the mail any day now.
Oh for sure. I ran them without a controller for years. I only set it up to do a wireless bridge.
For home, second-hand Ubiquiti might be. You can get flying saucers pulled during corpo upgrades for dirt cheap.
I was able to SSH into mine, and I’m running their Docker container with a UniFi Controller instead of a Cloud Key.
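In case it helps anyone, a sketch of roughly that setup using the community jacobalberty/unifi image. The image, ports, and volume name here are assumptions based on that project’s docs; adjust for whichever image you actually run:

```sh
# Run a UniFi controller in Docker (placeholder ports/volume;
# 8443 = web UI, 8080 = device inform, 3478/udp = STUN)
docker run -d --name unifi --restart unless-stopped \
  -p 8443:8443 -p 8080:8080 -p 3478:3478/udp \
  -v unifi-data:/unifi \
  jacobalberty/unifi:latest

# Then point each AP at the controller over SSH with the stock
# UniFi device command:
#   set-inform http://<controller-ip>:8080/inform
```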
Crashes aren’t normal even in Windows. Rare crashes mean a hardware problem 99.7% of the time. Typically RAM, as others have pointed out. The only way to figure that out is 4 passes of Memtest86+ without red. Yes, 4, because the first pass is a short one made to spot obviously bad RAM quickly. Marginally bad RAM might need more. I’ve had a case of 4 sticks where each passed on its own. Every pair passed on its own too. All 4 together failed on the third or fourth pass. And if you think I tested for shits and giggles, I did not. I was seeing checksum errors on my ZFS pool every other day. No crashes. Nevertheless, if it weren’t for ZFS I’d have had corrupted files all over my archive.
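For reference, a minimal sketch of how that kind of damage shows up (the pool name is a placeholder, not my actual setup):

```sh
# Scrub the pool, then check the CKSUM column per device; nonzero
# counts on otherwise-healthy disks are a classic bad-RAM symptom.
sudo zpool scrub tank      # 'tank' is a placeholder pool name
sudo zpool status -v tank
```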
Just signed up after they announced the non-profit and migrated all my mail. So far so good.
I wouldn’t go from Google to another for-profit though. I know how it ends.
Because Ubuntu LTS works very reliably, because there’s a huge body of information and large swathes of people on the Internet who can help, and because every project and vendor tests and releases their stuff for Ubuntu/Debian and documents it.
Despite the hate you see around these shores, Ubuntu LTS is among the best, if not the best, beginner distro. Importantly, it scales to any other proficiency level. The skill and knowledge acquired while learning Ubuntu transfer to Debian, as well as to working professionally with either of them.
Also, with the fuckery Red Hat has been pulling lately, it’s a disservice to new users to get them to learn the Red Hat ecosystem, unless they plan or need to use it professionally. If I had to bet, I’d bet that the RH ecosystem will be all but deserted by volunteers in the years to come. I bet that, as we speak, a whole lotta folks donating their time are coming to the conclusion that Debian was right and are abandoning ship.
That actually makes the most sense. So similar to how Linux was started.