The most-recent release of lemmy dicked up outbound federation pretty badly on the instance I use.
While it might work in the OS, setting the OS up may be a pain (the installer may or may not work like that) and I strongly suspect that the BIOS can’t handle it.
I suspect that an easier route would be to use a cheap – maybe older – low-end graphics card for the video output and then use DRI_PRIME to render on the other graphics card.
Emacs+org-mode
I mean, you can put a powered USB hub on that and get more ports if you want them.
Or get a USB drive enclosure that can take multiple drives.
I find that resellerratings.com is a decent first pass on an online retailer that you’ve never heard of before.
PrimeBuy:
https://www.resellerratings.com/store/Mega_Solutions_LLC
Insight:
You can set this sort of “redundancy with different-size drives without wasting a bunch of space” thing up at the block device level – I understand that Synology’s “Hybrid RAID” is a Linux system doing that. But you’ve got to be careful doing that; configure it wrong and you won’t have redundancy.
I don’t know, somewhat-surprisingly, of a software package that aims to manage a collection of disks to do this configuration at a high level, slicing up a collection of drives into smaller block devices and adding and removing disks and migrating data and such while providing guarantees that data has at least N drive redundancy.
That being said, even in such a configuration, you can only do so much. If you have one 10TB drive and the rest of the drives intended for redundant storage sum to 3TB – which is what you have – then no matter how you slice and dice things, you can't have a configuration that both survives failure of the 10TB drive and stores more than 3TB of data redundantly. You're going to have to waste 7TB of space or store data without redundancy.
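To make that arithmetic concrete, here's a quick sketch – a hypothetical helper, not any particular tool – of the upper bound on mirrored capacity for a set of drives, no matter how you partition them:

```python
def max_mirrored_capacity(drive_sizes_tb):
    """Upper bound on data storable with every byte on at least
    two different drives (single-drive-failure redundancy),
    regardless of how the drives are sliced into partitions."""
    total = sum(drive_sizes_tb)
    largest = max(drive_sizes_tb)
    # Every byte needs two copies, and the second copy of anything
    # on the largest drive has to fit on the remaining drives.
    return min(total / 2, total - largest)

print(max_mirrored_capacity([10, 1, 1, 1]))  # -> 3
```

With a 10TB drive plus 3TB of smaller drives, the bound is 3TB – exactly the situation above.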
What’s best to do is probably going to depend on how much you want to spend.
I really want USB-C power.
You can get a C-to-micro adapter if that will work for your use case and you have a micro-USB router that you like.
In an overzealous attempt to prevent HTML injection
I think that they were actually getting hit with attacks.
googles
Yeah:
https://lemmy.world/post/3986993
Yeah I think this was hastily done to prevent the XSS injection attacks that were happening IIRC. They implemented encoding for content, but looks like they never got around to fully decoding it.
Ah, I looked at Tortoise, but I do not have an nVidia GPU, so I couldn’t try it.
I use it on an AMD GPU.
EDIT: Wait, let me make sure. I was using an Nvidia GPU for a while and switched to AMD.
EDIT2: Oh, yeah, it uses transformers, and that doesn’t work on rocm presently, IIRC.
Festival – not cutting edge – will definitely be better than your Amiga, and can handle long text. Last time I set it up, IIRC I wanted some voices generated by Tokyo University or something, which took some setting up. It’ll probably be packaged in your Linux distro.
You can listen to a demo here.
https://www.cstr.ed.ac.uk/projects/festival/onlinedemo.html
It’s not LLM-based.
For short snippets, offline, one can use Tortoise TTS – which is LLM-based. But it's slow and can only generate clips of a limited length, so whether it's reasonable for you will depend a lot on your application. It will let one clone a voice – or make a voice sounding more-or-less similar – using some sound samples of the person speaking.
https://github.com/neonbjb/tortoise-tts
Examples at:
https://nonint.com/static/tortoise_v2_examples.html
I haven’t used Google’s, but I’d assume, given that Google is paying people to work on it full time, that whatever they’ve done probably sounds nicer. But, then not open source, so…shrugs
It sounds like this particular YouTube channel may take money to promote products – it has a bit on “contacting the creator about business opportunities”. I suppose that that would be independent of ad rates, though not of audience size.
I don't know whether doing periodic full backups is important for performance, but you'd need to do one if you wanted to remove old backups. It looks like the term for a full backup that reuses already-pushed data is a "synthetic full" backup, and duplicity can't do those – when it does a full, it pushes all the data over again.
I have never used it, but Borg Backup does appear to support this, if you want an alternative that can.
EDIT: However, Borg requires code that runs on the remote end, which may not be acceptable for some; you can't just aim it at a dumb fileserver the way you can with duplicity.
I have also never used it, but duplicati looks to my quick glance to be able to work without code running on the remote end and also to do synthetic full backups.
I don’t do offsite backup, but if your backup system supports it, you could physically take your backup drive to some location with a lot of bandwidth to toss the initial full backup up there.
I haven’t used it, but restic was mentioned in an earlier backup discussion. It appears to be able to use rclone, which can talk to object storage.
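The trick that lets tools in the Borg/restic family avoid re-pushing everything is content-addressed storage. This is a toy illustration of the idea – not either tool's actual on-disk format – showing why a "synthetic full" is cheap: it's just a new list of chunk hashes, and only genuinely new chunks get transferred:

```python
import hashlib

class ChunkStore:
    """Toy content-addressed store. A chunk that's already present
    is never transferred again, so a new 'full' snapshot costs only
    whatever data actually changed."""
    def __init__(self):
        self.chunks = {}   # sha256 hex digest -> bytes
        self.uploads = 0   # chunks actually "sent over the wire"

    def put(self, data):
        key = hashlib.sha256(data).hexdigest()
        if key not in self.chunks:
            self.chunks[key] = data
            self.uploads += 1
        return key

    def backup(self, chunks):
        """A 'snapshot' is just the ordered list of chunk keys."""
        return [self.put(c) for c in chunks]

store = ChunkStore()
snap1 = store.backup([b"aaa", b"bbb", b"ccc"])
snap2 = store.backup([b"aaa", b"bbb", b"ddd"])  # only b"ddd" is new
print(store.uploads)  # -> 4, not 6
```

Both snapshots are "full" in the sense that each lists every chunk it needs, but the second one only cost one chunk of transfer.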
I dunno. The IPv4 address space is getting pretty tight, and aside from rejiggering existing inefficient allocations, there’s not a lot you can do beyond NAT.
In the US, we had it pretty good for a long time, because we had a rather disproportionate chunk of the IPv4 address space – Ford, MIT, and Apple each had their own Class A netblock, about 0.4 percent of the IPv4 address space apiece, for example.
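A Class A is a /8, so the fraction of the address space each of those holdings represents is easy to check:

```python
# A /8 covers 2**24 addresses out of the 2**32 total IPv4 space.
class_a = 2 ** 24
total_ipv4 = 2 ** 32
print(class_a / total_ipv4)  # -> 0.00390625, i.e. about 0.4%
```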
But things have steadily gotten tighter as more and more of the world uses the Internet more and more.
https://whatismyipaddress.com/ipv6-ready
As expected, the ISPs are no longer receiving new allotments or allocations of public IPv4 addresses from the American Registry for Internet Numbers (ARIN). Some have managed to continue to provide new IPv4 addresses by reallocating some of the addresses they had been assigned in the past but perhaps had never passed on to customers. This buys them a little more time while they scramble to roll out and support IPv6 addresses.
Like, there’s real scarcity of the resource. It doesn’t require the scarcity to be artificially-induced.
My ISP used to let one get a /29 IPv4 block for residential users, though they stopped that years ago. Always have had a way to get publicly-facing IPv6 addresses, though.
End of the day, the real fix is to get the world on IPv6.
While I use ssh tunneling to access systems on a temporary basis – usually an http server – some caveats:
I don’t know of a daemon to set up locally that will re-establish tunnels on power loss and the like. Not technically-difficult, but something one probably wants if this is going to be how he’s gonna get at the system long-term rather than “I just need one-off access”.
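Absent a ready-made daemon, the supervisor itself really is a few lines. This is a generic sketch – the ssh command line in the comment is a placeholder for whatever tunnel you'd actually run, not a tested invocation:

```python
import subprocess
import time

def keep_alive(cmd, delay=5.0, max_restarts=None):
    """Re-run cmd every time it exits, so a dropped tunnel
    (network blip, remote reboot, power loss on either end)
    comes back on its own. max_restarts=None runs forever."""
    restarts = 0
    while max_restarts is None or restarts < max_restarts:
        subprocess.run(cmd)
        restarts += 1
        time.sleep(delay)  # don't hammer the remote end on failure
    return restarts

# e.g. keep_alive(["ssh", "-N", "-R", "2222:localhost:22", "user@vps"])
```

A real deployment would also want backoff and logging, but this is the shape of it.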
One other downside – the service that the user here is aiming to expose is apparently ssh. For me – reaching an http server – wrapping the connection for remote use is desirable. For him, it probably isn’t, as there’ll be two layers of encryption. Not the end of the world, but it’s a hit. You do want encryption in the outer protocol at least insofar as you need it to protect authentication to the VPS anyway.
This kind of drives me bonkers too – a lot of video content that would do fine in non-video form is in video. Ditto for podcasts, though there I get that there are people who want to listen to a podcast in the background while driving or something.
I think that a lot of people – no idea if this is the case here – post content to YouTube because it’s got a low barrier to monetize a channel. I think that there’s a very valid argument that there should be a written-media equivalent to YouTube. Like, there are blogging services, but AFAIK there isn’t one that does that sort of monetization.
I’m not sure why.
Maybe it’s that hosting video is bandwidth-expensive, so it’s harder for another service to just rip the content or something.
I don't care much about XMPP as a protocol versus some other messaging protocol, but I care a fair bit about the widespread adoption of federated XMPP.
I don’t quite understand what this means, could you elaborate?
For me, what is interesting about XMPP is that – if federated – it permits the kind of open environment that email has traditionally had. An open market, where one can choose a provider and there aren't walled gardens.
It's not that I've sat down and reviewed the different messaging protocols and decided that there are fundamental, unfixable issues in the others. That is, it's not specifically XMPP that's valuable, but the fact that XMPP can be deployed in a federated manner, like email.
You could hypothetically even add federation to other protocols, or gateway among them. Gatewaying is doable if the various providers are willing. I've run bitlbee on my Linux system before; that's a server that locally acts like an IRC server, permitting the use of IRC clients, but gateways messages to a number of other protocols and services – gatewaying writ small. From my standpoint, that'd also solve the problem, if all the messaging providers were willing to gateway to other systems. Early on, when the walled email gardens – CompuServe, AOL, etc. – opened up to Internet email, they did gateway to the Internet.
And you can layer protocols on top of that, like OTR, to provide communication security.
I don't think that "XMPP as a protocol" is the interesting part, because if there were one big messaging provider and it internally used non-federated XMPP, we wouldn't really notice a difference.
And it doesn't even, honestly, require use of a single, explicitly-federated protocol. That's useful for things like addressing and routing – it creates a single convention, that username@xmpp-host is a way to reach a user. If you used gateways, you might have user@xmpp-host.external@traditionally-nonfederated-messaging-system. We had stuff like UUCP that did that some decades back, and you could build some sort of system for looking up a route to a user. Hell, I'm not even convinced that XMPP has that necessarily right – maybe it'd be better to be user@pubkey-signature, to permit cross-host account portability. We could have ICQ and Matrix and Signal and SMS all co-existing with gateways, and that'd work okay from my standpoint.
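A gateway-style address like that routes on its rightmost hop. A toy parser – a hypothetical convention for illustration, not an actual standard, though it's in the spirit of UUCP's old source routing (which used `!` and peeled hops off the left instead) – makes the idea concrete:

```python
def next_hop(address):
    """Split a possibly-gatewayed address into what to deliver
    and where to deliver it, treating the rightmost '@' as the
    next hop. Each gateway peels off one hop and forwards the rest."""
    rest, _, hop = address.rpartition("@")
    return rest, hop

print(next_hop("alice@xmpp-host"))
# -> ('alice', 'xmpp-host')
print(next_hop("alice@xmpp-host.external@messaging-system"))
# -> ('alice@xmpp-host.external', 'messaging-system')
```

The gateway at messaging-system would strip its own hop and hand "alice@xmpp-host.external" to the next system along.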
Federated XMPP is designed to work like email, to have global interoperability, but even a shift to XMPP doesn’t guarantee that that happens – or that, over the long term, that’s where we wind up, even if we start there.
I don’t think that the issue is really one of protocol, but of interoperability. Moving to XMPP won’t (necessarily) solve it, though I wouldn’t be sad to see things move in that direction. And it’s a problem that can be solved without moving to XMPP; we could even do it with multiple messaging protocols.
I think that the issue is one of business incentives. If you have a good chance to be a walled garden, you don't want a competitive market, because competition kills your ability to make money. If you are big enough, you can leverage network effect – your users provide value to the network, and you control the access the outside world has to your users – to gain an advantage and lock-in. You are disincentivized from decoupling from other services, in that you want your users to have access to those other users; but as one provider becomes a relatively-larger share of the network, its incentive to leverage access to its userbase grows and the disincentive of cutting off the outside world shrinks, so it tends more towards trying to be the new walled garden.
That is, I think that the core problem is one of incentives surrounding interoperability among providers. As a user, I would like to have a competitive market. As a provider – at least one with a chance to be The One Walled Garden – I don’t want a competitive market. Interoperability makes for a competitive market; it tends to commoditize a provider’s service insofar as they can’t leverage access to their users any more.
That is a valid concern, though the point of the article is to try and convince people why it won’t happen like it did with Google or might with Meta for structural reasons (rather than “oh but we’re different” reasons).
I’m not really trying to beat up on Snikket here in particular. They may be just fine (or are today). I’m just trying to provide a broader take on the problem that the author is talking about – the walled garden versus open system problem.
Not really what you’re asking for, but there are enclosed racks with sound isolation. Though they are a bit pricey, to my way of thinking.