• 0 Posts
  • 82 Comments
Joined 1 year ago
Cake day: July 7th, 2023

  • You don’t need to run it on a pi. In fact, I’d actually argue against it; a Pi will be underpowered if you ever need to transcode anything. Transcoding is what Plex/Jellyfin does when your watching device can’t natively play the video. Maybe you have a 4K video, but you’re playing it on a 1080p screen. That video will need to be transcoded from 4K down to 1080p for the screen to be able to display it. Or maybe the file is encoded with HEVC (a fairly recent video codec) which isn’t widely supported by older devices. This often happens with things like smart TVs, which frequently don’t support modern codecs and need transcoding even when the resolution is correct.

    Basically, if you’re 100% positive that every device you’ll ever watch on will never need transcoding, then a Pi is acceptable. But for anything else, I’d recommend a small PC instead. You can even use an old PC if you have one lying around.
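    If it helps to picture it, here’s a toy sketch of that direct-play-vs-transcode decision. The device names and codec sets below are made up for illustration; this isn’t Plex or Jellyfin’s actual logic:

```python
# Hypothetical device capability table -- real servers query the client app
# for this information at playback time.
SUPPORTED = {
    "smart_tv": {"codecs": {"h264"}, "max_height": 1080},
    "modern_phone": {"codecs": {"h264", "hevc", "av1"}, "max_height": 2160},
}

def needs_transcode(device: str, codec: str, height: int) -> bool:
    # Transcode if the client can't decode the codec, or the video
    # exceeds the resolution the client can display.
    caps = SUPPORTED[device]
    return codec not in caps["codecs"] or height > caps["max_height"]

print(needs_transcode("smart_tv", "hevc", 2160))      # True  (older TV, 4K HEVC)
print(needs_transcode("modern_phone", "hevc", 2160))  # False (direct play)
```

    Note how the smart TV fails on both counts at once, which is exactly the scenario where a Pi falls over and a small PC earns its keep.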

    Or if you want a newer machine, maybe something like an HP EliteDesk. They’re basically what you see in every single cubicle in every single office building. They’re extremely popular in corporate settings, which means there are a ton of used/refurbished systems available for cheap, because IT destroys the drives and sends the rest to refurb when they upgrade their fleet of PCs. So for a refurb you’re basically just paying the cost of the SSD they added (to replace the one IT pulled out), plus whatever labor went into dusting it out and checking that all the connections work. You can pick up a fairly modern one for around $250 on Amazon (or your preferred electronics store).

    Worth noting that the EliteDesk generations are marked by a G-number, so google the model (like an EliteDesk G9, G7, etc.) to see what kind of processor it has. Avoid anything with an Intel 13th or 14th generation CPU (they have major reliability issues), and check Plex/Jellyfin’s CPU requirements list to see if it supports hardware-accelerated transcoding. For Intel chips, look for Quick Sync support.

    For storage, I’d recommend a NAS with however many hard drives you can afford, ideally one with extra bays for future expansion. Some NAS systems support Plex and/or Jellyfin directly, but the requirements for full support are tricky, and you’ll almost always have better luck just running a dedicated PC for Plex. Then for playback, one of two things happens: either the device is capable of directly playing the file, or it needs to be transcoded. If it’s direct playing, the Plex server basically just points the player at the NAS, and the player handles the rest. If it’s transcoding, the PC pulls the file from the NAS, transcodes it, and streams it to the player.

    As for deciding on Plex vs Jellyfin, that’s really a matter of personal preference. If you’re using Plex, I’d highly suggest a Plex Pass sub/lifetime purchase; wait until Black Friday, because they historically run a ~25%-off discount on the lifetime pass. Plex is definitely easier to set up, especially if you plan on streaming outside of your LAN.

    Jellyfin currently suffers from a lack of native app support; lots of smart TVs don’t have a native Jellyfin app, for instance. But some people have issues and complaints (many of them justified!) with Plex, so if FOSS sounds appealing, consider Jellyfin instead. Jellyfin is also under rapid development, and many people expect it to reach feature parity with Plex within a few years.

    And if you’re having trouble deciding, you can actually set up both (they can run in tandem on the same machine) and then see which one you prefer.

    And the nice part about using a mini PC is that you can also use it for more than just Plex/Jellyfin. I have the *arr suite running on mine, alongside a Factorio server, a Palworld server, and a few other things.



  • If you’re referring to the wavy pattern along the cutting edge, that’s not from the folding process. The hamon is created during quenching: before the quench, part of the blade is coated in clay, so the covered steel cools more slowly than the exposed steel. That differential hardening is what’s visible as the hamon.

    It’s largely decorative, but it does serve a function, as it marks which part of the blade can be sharpened to an edge.


  • Yeah, Japanese steel wasn’t great, but they were working with what they had available at the time. Katanas were basically made from iron sand, smelted with charcoal into rough, slag-riddled blooms rather than nice even ingots. The steel they got was actually extremely high carbon in places, but that also meant it was brittle as hell, because those high-carbon pockets were prone to shattering.

    So folding was invented to even out the steel’s carbon content (just like a Damascus steel blade has visible stripes, Japanese steel had invisible stripes of high- and low-carbon steel) and to lower the carbon content overall; every time you reheat for another fold, you burn off some carbon. So the folding process took the steel from extreme high-carbon pockets to a more evenly distributed carbon content.
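    As a toy model of why folding evens things out: treat the billet as layers with different carbon percentages, let each fold weld and average neighboring layers, and burn off a little carbon per reheat. The starting numbers, the 2% burn-off, and the mixing rule are all invented for illustration; real carbon diffusion is far messier:

```python
import statistics

def fold(layers, burn_loss=0.02):
    # Weld adjacent layers together (crude stand-in for carbon diffusing
    # between them), stack the billet on itself to restore the layer count,
    # and burn off a little carbon with each reheat.
    half = [(layers[i] + layers[i + 1]) / 2 for i in range(0, len(layers), 2)]
    doubled = half + half
    return [c * (1 - burn_loss) for c in doubled]

billet = [2.5, 0.2, 1.8, 0.4, 2.1, 0.3, 1.6, 0.5]  # wildly uneven %C layers
for _ in range(10):
    billet = fold(billet)

print(round(max(billet) - min(billet), 3))  # spread collapses toward 0
print(round(statistics.mean(billet), 2))    # overall carbon drops too
```

    Both effects the comment describes fall out of the model: the high/low stripes average away within a few folds, and the mean carbon content steadily decreases with each reheat.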

    Now that modern steel processing exists, the only real reason to stick with the folding method is tradition. There’s no need to fold modern steel ingots, because they’re already homogeneous and can be produced at whatever carbon level you want.



  • This looks like it was a timing analysis attack. Basically, they’re trying to figure out which user did something specific. They match the timing of the event with the traffic from the user, and now they know which user did the thing.

    It can be fuzzed by streaming something at the same time, because your traffic becomes much harder to correlate when you have a semi-constant stream of data running. But streaming something over Tor is an exercise in patience (and it’s not something the typical user will just have running in the background at all times), so timing analysis attacks are gaining popularity.
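    A toy illustration of the correlation step itself (all names and timestamps are made up, and real deanonymization tooling is far more sophisticated): the observer picks the user whose traffic bursts consistently land just before each observed event.

```python
event_times = [10.0, 42.5, 63.1, 90.7]  # when the anonymous events appeared

# Per-user timestamps of observed traffic bursts (hypothetical captures
# from the suspects' network links).
traffic = {
    "user_a": [3.2, 55.0, 71.9, 88.0],
    "user_b": [9.8, 42.3, 62.9, 90.5],   # a burst just before every event
    "user_c": [20.1, 40.0, 60.0, 80.0],
}

def score(bursts, events, window=1.0):
    # Count events preceded by a traffic burst within `window` seconds.
    return sum(any(0 <= e - b <= window for b in bursts) for e in events)

suspect = max(traffic, key=lambda u: score(traffic[u], event_times))
print(suspect)  # user_b
```

    The constant-stream defense works precisely because it floods the capture with bursts, so every window contains traffic and the score stops discriminating between users.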


  • That’s more on the OS than the text protocol. The protocol doesn’t just hold a text in the ether until it’s time for delivery. A scheduled text is you telling the phone “hey, wait to send this message until it’s time.” Then your phone sends it at the proper time.

    iOS still doesn’t have built-in text scheduling. There are workarounds (like using the Shortcuts app to build a “send this text” automation that runs at a specific time), but that’s not the same thing as native support.
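    The key point, that scheduling lives on the device rather than in the network, can be sketched like this (the send callback is a hypothetical stand-in; the SMS protocol itself has no “deliver later” field):

```python
import heapq

outbox = []  # (send_at, recipient, body) -- held locally on the phone

def schedule(send_at, recipient, body):
    # "Hey, wait to send this message until it's time."
    heapq.heappush(outbox, (send_at, recipient, body))

def pump(now, send):
    # Called periodically by the OS; hands anything due to the normal send path.
    while outbox and outbox[0][0] <= now:
        _, recipient, body = heapq.heappop(outbox)
        send(recipient, body)

sent = []
schedule(100, "+15551234", "happy birthday!")
pump(50, lambda r, b: sent.append((r, b)))   # too early: nothing goes out
pump(120, lambda r, b: sent.append((r, b)))  # time reached: message sent
print(sent)
```

    Which is also why a scheduled text won’t go out if the phone is dead at the appointed time: there’s nothing server-side holding the message.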


  • It’s more about the lack of iMessage features. Things like editing, unsend, text effects, etc. are absent in regular texts. If everyone is on iMessage, everyone can use those enhanced features. They’re apparently pretty popular in group chats, but even a single Android user will drag the entire conversation down into regular text messages instead. So lots of iPhone users (especially the younger Gen Z and Alpha) started complaining whenever someone had an Android, or even outright bullying them for it.

    And for Android users, texting with an iPhone user is a horrible experience; images are horribly compressed, videos are severely limited in file size and compressed, group texts need to be opened as an attachment to be read, etc… All because iOS refused to adopt the more modern RCS texting protocol.




  • There is also the hilariously misguided belief that good coders do not produce bugs so there’s no need for debugging.

    Yeah, fuck this specifically. I’d rather have a good troubleshooter. I work in live events; I don’t care if an audio technician can run a concert and have it sounding wonderful under ideal conditions. I care if they can salvage a concert after the entire fucking rig stops working 5 minutes before the show starts. I judge techs almost solely on their ability to troubleshoot.

    Anyone can run a system that is already built, but a truly good technician can identify where a problem is and work to fix it. I’ve seen too many “good” technicians freeze up and panic at the first sign of trouble, which really just tells me they’re not as good as they say. When you have a show starting in 10 minutes and you have no audio, you can’t waste time with panic.





  • At least on iOS, it goes a step further and tells you specifically when an app is accessing your location, microphone, camera, etc… It even delineates whether the access is in the foreground or the background. For instance, if I check my weather app, I get this symbol in the upper corner:

    The circled arrow means it is actively accessing my location. And if I close the app, it gives me this instead:

    The uncircled arrow means my location was accessed in the foreground recently. And if it happens entirely in the background (like maybe Google accessing my location to check travel time for an upcoming calendar event), the arrow will be an outline instead of being filled in.

    The same basic rules apply for camera and mic access. If it accesses my mic, I get an orange dot. If it accesses my camera, I get a green dot.






  • That’s because they’ve been pushing the iPad as a sort of Mac Lite, but they can’t do that unless you can plug peripherals or a thumb drive into it. You can 100% plug a USB-C laptop dock into an iPad, and it’ll work. You can even use a mouse with it if you really want to.

    But they wanted to keep Lightning around as long as possible, because they made a commission on every single Lightning cable that was sold; companies had to license the rights to use the connector and pay Apple for every one they made. That’s why Lightning cables were always a few bucks more expensive than comparable USB-C cables. That extra few bucks went straight into Apple’s pocket. It was a huge source of passive income for the company, which they were reluctant to let go of.