• 21 Posts
  • 1.11K Comments
Joined 1 year ago
Cake day: July 5th, 2023

  • There’s zero MS in the stack on anything with SYNC 4 and newer. Your salesperson is wrong. Even development is largely done on Ubuntu. SYNC 4 has two front ends: one is Qt-based, with some Panasonic outsourcing baggage; the newer one is web-based. The latter is what’s in the Mach-E. Since about 2017 all of this has moved in-house. Ford hired the whole BlackBerry mobile R&D org in late 2016 - people, offices and everything. It’s had an honest-to-god software org since then.

    Your Flex probably had the older SYNC iteration that was MS-developed. BTW I’m not sure whether that one was Windows-based or QNX with MS devs building the software stack on top of it.

  • Avid Amoeba@lemmy.ca to linuxmemes@lemmy.world · Enjoy the moment · edited 5 hours ago

    Valve is doing this? Not Android since 2008?

    Heck, we know people don’t give a shit what’s under the covers since at least the switch from Windows 98 to 2000/XP, the latter being a very different OS. It could have been BSD or Linux and people wouldn’t have batted an eye, as long as the Start menu looked the same and Word, CorelDRAW, Photoshop and AutoCAD worked.

  • Unless you need RAID 5/6, which doesn’t work well on btrfs

    Yes. They’re already using some sort of parity RAID, so I assume they’d want parity RAID in ZFS/Btrfs, and as you said, that’s not an option on Btrfs. So LVMRAID + Btrfs is the alternative. LVMRAID because it’s simpler to use than mdraid + LVM, and the implementation is still mdraid under the covers. A rough sketch of that setup is below.
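
    For illustration, here’s a minimal sketch of that LVMRAID + Btrfs setup, assuming a volume group named vg0 already spans the member disks (the VG name, LV name, size and mount point are hypothetical placeholders):

```python
# Minimal sketch of the LVMRAID + Btrfs layout described above.
# Assumes an existing volume group "vg0"; the VG name, LV name,
# size, and mount point are hypothetical placeholders.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create a raid5 logical volume; LVM uses mdraid under the covers.
# "-i 3" = 3 data stripes, i.e. 4 devices total including parity.
run(["lvcreate", "--type", "raid5", "-i", "3", "-L", "1T", "-n", "data", "vg0"])

# Plain (single-device) Btrfs on top keeps checksums and snapshots
# while LVM handles the parity RAID.
run(["mkfs.btrfs", "/dev/vg0/data"])
run(["mount", "/dev/vg0/data", "/mnt/data"])
```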


  • Avid Amoeba@lemmy.ca to Selfhosted@lemmy.world · Anyone running ZFS? · edited 1 day ago

    And you probably know that sync writes will shred NAND while async writes are not that bad.

    This doesn’t make sense. SSD controllers have been able to keep write amplification under control, under any load, since the SandForce 2 days.

    Also, most of the argument around speed doesn’t hold up, other than DC-grade SSDs being expected to be faster under sustained random loads. We know how fast consumer SSDs are. We know their sequential and random performance, including sustained performance under constant load. There are plenty of benchmarks out there for most popular models, and on average they’ll perform as those benchmarks show. If that’s enough for the person’s use case, it’s enough. And they’ll handle as many TB of writes as advertised, and the amount of writes can be monitored through SMART, as sketched below.
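
    For example, a quick way to read that write counter (assuming smartmontools is installed; the attribute name varies by vendor and interface, so the parsing here is a rough sketch):

```python
# Rough sketch: pull the total-writes counter from SMART data.
# Assumes smartmontools is installed. NVMe drives report
# "Data Units Written"; many SATA drives report Total_LBAs_Written.
import subprocess

def writes_report(device):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if "Data Units Written" in line or "Total_LBAs_Written" in line:
            return line.strip()
    return "no write counter attribute found"

print(writes_report("/dev/nvme0"))  # hypothetical device path
```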

    And why would ZFS be any different from any other similar FS/storage system with regard to random writes? I’m not aware of ZFS generating more IO than needed; if it did, that would manifest as lower performance compared to similar systems, when in fact ZFS is often faster. I think SSD performance characteristics are independent of ZFS.

    Also, OP is talking about HDDs, so I’m not even sure where the ZFS-on-SSDs discussion is coming from.



  • Avid Amoeba@lemmy.ca to Selfhosted@lemmy.world · Anyone running ZFS? · edited 2 days ago

    Not sure where you’re getting that. I’ve been running ZFS for 5 years now on bottom-of-the-barrel consumer drives - shucked drives and old drives. I’ve used 7 shucked drives total. One died during a physical move; the remaining 6 are still in use in my primary server. Oh, and the speed is superb. The current RAIDz2, composed of the 6 shucked drives and 2 IronWolfs, does 1.3 GB/s sequential reads and 4K write IOPS in the thousands. And this is all happening over USB, in 2x 4-bay USB DAS enclosures. A quick way to sanity-check a number like that is sketched below.
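
    For reference, a crude way to check sequential read throughput on your own pool (the file path is a hypothetical placeholder; use a file larger than RAM, or drop caches first, for honest numbers):

```python
# Crude sequential-read throughput check. The file path below is a
# hypothetical placeholder; results reflect cache unless the file is
# larger than RAM or caches are dropped beforehand.
import time

CHUNK = 1024 * 1024  # 1 MiB reads

def seq_read_mbps(path):
    total = 0
    start = time.monotonic()
    with open(path, "rb", buffering=0) as f:
        while chunk := f.read(CHUNK):
            total += len(chunk)
    elapsed = time.monotonic() - start
    return total / elapsed / 1e6  # MB/s

print(f"{seq_read_mbps('/tank/bigfile.bin'):.0f} MB/s")  # hypothetical path
```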


  • Avid Amoeba@lemmy.ca to Selfhosted@lemmy.world · Anyone running ZFS? · edited 2 days ago

    That doesn’t sound right. Also, random writes don’t kill SSDs; total writes do, and you can see how much has been written to an SSD in its SMART values. I’ve used SSDs as swap for years without any of them breaking - heavily used swap, for running VMs and software builds. Their total-bytes-written counters increased steadily but never reached the limit, and the drives didn’t die despite the sustained random-write load. One was an Intel MacBook onboard SSD, another a random Toshiba OEM NVMe, another a Samsung OEM NVMe. The endurance math is sketched below.
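
    To put numbers on the “total writes” point, here’s the back-of-the-envelope endurance math (the counter value and rated TBW are hypothetical examples; read the real counter from SMART as shown earlier in the thread):

```python
# Back-of-the-envelope SSD endurance math for the "total writes" point.
# Both figures below are hypothetical examples, not real drive data.
total_lbas_written = 120_000_000_000   # hypothetical SMART counter value
sector_size = 512                      # bytes per LBA on most SATA drives
rated_tbw = 600                        # hypothetical vendor endurance, in TB

written_tb = total_lbas_written * sector_size / 1e12
print(f"written: {written_tb:.1f} TB of {rated_tbw} TB rated "
      f"({written_tb / rated_tbw:.0%} of endurance used)")
```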