

There can be theoretical audit or blame issues: since you’re not “paying,” how does the company pass the buck (via SLA contracts) if something fucks up with LE?
Ironically, the shortening of cert lifetimes has pushed me to automated systems and away from the traditional paid trust providers.
I used to roll a 1-year cert for my CDN, manually buy renewals, and go through the process of signing and uploading the new ones. It wasn’t particularly onerous, but then they moved to, I think, either a 3- or 6-month maximum validity, which was the point where I just automated it with Let’s Encrypt.
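The whole dance reduces to a cron job; a minimal sketch, assuming certbot as the ACME client and nginx as the thing serving the cert (both are assumptions, any client with a renew command works the same way):

    # crontab entry: check twice daily; certbot only actually renews
    # certs that are nearing expiry, so this is cheap to run often
    0 3,15 * * * certbot renew --quiet --deploy-hook "systemctl reload nginx"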
In general I’m not a fan of how we do root of trust on the web; I’d have much preferred DANE to catch on, where I can pin a cert at the DNS level, secured with DNSSEC and trusted through IANA and the root zone.
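For the unfamiliar, the pin itself is just a TLSA record in the (DNSSEC-signed) zone; a sketch for a hypothetical example.com:

    ; usage 3 = DANE-EE (pin the server's own key), selector 1 = SubjectPublicKeyInfo,
    ; matching type 1 = SHA-256 of it
    _443._tcp.example.com. IN TLSA 3 1 1 <hex sha-256 of the SPKI>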
IP law needs overhauling, but these are the last people (aside from Disney et al.) I’d trust to draft the new laws.
The US manages to store 1.5B pounds of cheese it doesn’t do anything with; I think China can handle constructing some warehouses to hold what it digs up from the ground.
    if not x then … end

is very common in Lua for similar purposes; very rarely do you see hard nil comparisons or calls to typeof (the last time I did was for a serializer).
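A minimal sketch of why the idiom works, and of the one case where a hard nil check actually differs: not x is true for both nil and false.

    local function greet(name)
      if not name then            -- catches nil *and* false
        name = "world"
      end
      print("hello, " .. name)
    end

    greet()          --> hello, world
    greet("moon")    --> hello, moon

    -- an explicit nil comparison only matters when false is a real value:
    local function get(t, k, default)
      local v = t[k]
      if v == nil then return default end   -- a stored false stays false
      return v
    end

    print(get({ flag = false }, "flag", true))   --> false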
If only; this is “modern” PhysX, so we’d need the source to the original Ageia PhysX 2.x branch to fix it properly.
The amount of stupid AI scraping behavior I see even on my small websites is ridiculous: they’ll endlessly pound identical pages as fast as possible over an entire week, apparently without even checking whether the contents changed. Probably some vibe-coded shit that barely functions.
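Checking for changes isn’t even hard. A sketch of a conditional fetch in Lua, assuming the LuaSocket library (socket.http); any well-behaved scraper does the equivalent:

    local http  = require("socket.http")
    local ltn12 = require("ltn12")

    -- Re-fetch a page only if it changed, using the ETag from last time.
    local function fetch_if_changed(url, etag)
      local body = {}
      local _, code, headers = http.request{
        url     = url,
        headers = etag and { ["if-none-match"] = etag } or nil,
        sink    = ltn12.sink.table(body),
      }
      if code == 304 then
        return nil, etag                 -- unchanged; server sent no body
      end
      return table.concat(body), headers and headers["etag"]
    end

If-Modified-Since plus the last Last-Modified value works the same way for servers that don’t emit ETags.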
Man, this reminds me of the lockers we had in middle school that used dial locks: cheap Masterlock jobbies that, despite having notches between the major numbers, would register as long as you were within 2 of the actual number.
Plus it felt like they’d slip internally, so if you dialed too quickly (because class starts in 3 minutes at the other end of the building) you’d have to start all over.
Yeah, electric motors are what I notice the most, be it on washers/dryers, garbage disposals (which come in 1/3, 1/2, 3/4, and 1 HP ratings), and more.
Probably a mix of Z systems; that stuff goes back 20-odd years, and even then, older code can still run on new Z systems, which is something IBM brags about.
Mainframes aren’t old, they’re just niche technology, and that includes enterprise Java software.
Uh, Java is specifically supported by IBM on the Power and Z ISAs, and they have both their own distribution and guides for writing Java programs for mainframes in particular.
This shouldn’t be a surprise, because after COBOL, Java is the most enterprise language that has ever enterprised.
Reddit is becoming such a shithole anyway; the site barely functions in a mobile browser now, and half the time it has API errors or fails to load.
Further hampered by the Steam “discussions,” which are an almost completely unmoderated cesspit.
Alternative roots are an interesting concept, but really people just need good alternative recursive resolvers.
Mostly we need advanced packaging built out stateside; all the most advanced SoCs have to go elsewhere to be assembled into their final configuration.
Don’t worry, the new strategy is to string a company along with talks of a buyout, then, when their cash runs out and they declare bankruptcy, buy all the assets at fire-sale prices.
If they were a small or free service I wouldn’t have much issue, but they do charge; I don’t think it’s too much to ask that they at least attempt to scrape the wider web.
Building their own database seems the prudent thing long-term; I don’t doubt they could shore up coverage beyond Bing. They don’t have to replace the other indexes wholesale, just supplement them.
They have small-web and news indexing, but other than that, AFAICT they rely completely on other providers. Which is a shame; Google allows submitting sites for indexing and notifies you if it can’t.
Their scraper doesn’t need to cover everything, since they have access to other indexes, but they really should be developing that capability instead of relying on Bing and other providers to deliver good results, or results at all.
Small web always returns 0 results for anything that isn’t extremely broad, unfortunately.
I think it’s “the algorithm”: people basically just want to be force-fed “content.” Look at how successful TikTok is, largely because its algorithm very quickly narrows down user habits and provides endless distraction.
Mastodon and fediverse alternatives, by comparison, have very simple feeds and ways to surface content; they simply don’t “hook” people the same way, and that’s the competition.
On one hand we should probably be doing away with “the algorithm,” for reasons not enumerated here for brevity; on the other hand, maybe the fediverse should build something to accommodate this demand, because otherwise the non-fedi sites will.