

trevor , to Technology in Apple Decides to Block Open-Source Emulator App for iOS

This is just me being pedantic, but I keep seeing this mistake when UTM is mentioned (specifically in headlines), so I feel like I have to say something:

UTM is not an emulator. It is virtual machine software that uses an emulator (QEMU) to virtualize operating systems.

The difference: emulators emulate hardware, on which the virtualized operating systems run.

Gamers_mate , (edited ) to Technology in Apple Decides to Block Open-Source Emulator App for iOS

"The developers of UTM mention that Apple even went the extra step, and disallowed the publishing of UTM SE on third-party marketplaces."

Apple does realize third-party marketplaces can have their own rules because they are not affiliated with Apple, right?

hamsterkill ,

I believe Apple still has the power to block third-party store apps based on signature. It's a security thing, to be able to clean up malware.

CreativeTensors ,
@CreativeTensors@beehaw.org avatar

I'm sure the EU will love that bit of malicious compliance. Apple have shown they will use that same mechanism to remove non-malware they just don't approve of...

christophski , to Technology in Apple Decides to Block Open-Source Emulator App for iOS

Apple's constant anti-interoperability stance is the core reason I do not and will not own their products

scrubbles , to Technology in Apple Decides to Block Open-Source Emulator App for iOS
@scrubbles@poptalk.scrubbles.tech avatar

Big companies will constantly work against open standards.

thingsiplay ,
@thingsiplay@beehaw.org avatar

Yes, but this case here is not a problem of open standards. It's a misuse of power to exclude certain types of applications from the ecosystem. That can happen even with companies that follow open standards: they could still misuse their power and position to exclude whatever they want, according to their policy.

Moonrise2473 , to Fediverse in Is Mastodon's Link-Previewing Overloading Servers ?

If I understand right, that means link previews are requested every single time a user sees one? The instance should request the preview once a week, cache it, and serve that to users.
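Something like a minimal TTL cache would do; here's a sketch in Python (the one-week TTL and the fetcher are illustrative assumptions, not Mastodon's actual code):

import time
import urllib.request

PREVIEW_TTL = 7 * 24 * 3600   # refresh a preview at most once a week
_cache = {}                   # url -> (fetched_at, preview bytes)

def fetch_preview(url):
    # Hypothetical fetcher: one GET, keeping only what a preview card needs.
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read(1024)

def get_preview(url):
    # Serve from cache; hit the origin site only when the entry is stale.
    now = time.time()
    entry = _cache.get(url)
    if entry and now - entry[0] < PREVIEW_TTL:
        return entry[1]
    data = fetch_preview(url)
    _cache[url] = (now, data)
    return data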

ReveredOxygen ,
@ReveredOxygen@sh.itjust.works avatar

I believe instances generate the preview as soon as it's federated. The problem is that if you have many followers, each of their instances will try to generate a preview at the same time
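A common mitigation for that thundering-herd pattern (not something Mastodon is confirmed to implement; the delay window here is made up) is to add random jitter before fetching:

import random
import time

def fetch_with_jitter(fetch, url, max_delay=60):
    # Sleep a random amount first, so thousands of instances that all
    # received the same post don't hit the origin at the same instant.
    time.sleep(random.uniform(0, max_delay))
    return fetch(url)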

delirious_owl , to Fediverse in Is Mastodon's Link-Previewing Overloading Servers ?
@delirious_owl@discuss.online avatar

Just fucking cache.

If a GET request is breaking your server, you're doing something horribly wrong.

uis , (edited )
@uis@lemm.ee avatar

It's about an amplification attack. No matter how well you cache, you will still send replies.

delirious_owl ,
@delirious_owl@discuss.online avatar

Doesn't apply to GET

uis ,
@uis@lemm.ee avatar

Next stage: doesn't apply to DNS.

For context, the DNS amplification factor (response bytes per request byte) is about 150.
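As a toy calculation with illustrative sizes (the exact byte counts are assumptions, not measurements):

# Illustrative only: a small DNS query vs. a large (e.g. ANY/DNSSEC) response.
query_bytes = 60
response_bytes = 9000
print(f"amplification factor: {response_bytes / query_bytes:.0f}x")  # 150x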

rimu , to Fediverse in Is Mastodon's Link-Previewing Overloading Servers ?
@rimu@piefed.social avatar

In the comments on the article, people have debugged their Cloudflare/caching configuration for them and told them what they're doing wrong.

helenslunch , to Fediverse in Is Mastodon's Link-Previewing Overloading Servers ?
@helenslunch@feddit.nl avatar

Only the ones with 2GB of RAM

MentalEdge , to Fediverse in Is Mastodon's Link-Previewing Overloading Servers ?
@MentalEdge@sopuli.xyz avatar

Foss project: has 100 open issues

A year passes

Foss project: 50 issues got resolved, 50 new ones have been opened in the meantime

Why hasn't this giant project fixed a single bug?

0x1C3B00DA ,
@0x1C3B00DA@fedia.io avatar

This issue has been noted since Mastodon was initially released, more than 7 years ago. It has also been filed multiple times over the years, indicating that previous small "fixes" for it haven't fully resolved it.

dsemy ,

I'm sure an affected website could have paid a web developer to find a solution to this issue in the past 7 years if it was that important to them.

veroxii ,

Or probably pay an extra $5 for the better hosting plan.

Die4Ever , (edited )
@Die4Ever@programming.dev avatar

Or use Cloudflare (properly)

pedroapero OP ,

They say they do in the article.

Die4Ever ,
@Die4Ever@programming.dev avatar

Then they aren't using it properly

0x1C3B00DA ,
@0x1C3B00DA@fedia.io avatar

People have submitted various fixes but the lead developer blocks them. Expecting owners of small personal websites to pay to fix bugs of any random software that hits their site is ridiculous. This is mastodon's fault and they should fix it. As long as the web has been around, the expected behavior has been for a software team to prioritize bugs that affect other sites.

dsemy ,

If they don't want to pay to fix it, they can just block the user agent (or just fix their website; this issue affects them so badly mainly because they don't cache).
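For a small site running behind a Python WSGI app, blocking the user agent could be as simple as the sketch below (the "Mastodon" substring match is an assumption; check your access logs for the exact user-agent string):

def block_mastodon(app):
    # WSGI middleware: refuse requests whose User-Agent looks like
    # Mastodon's preview fetcher.
    def wrapper(environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "")
        if "Mastodon" in ua:
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"preview fetching disabled\n"]
        return app(environ, start_response)
    return wrapper

# usage: application = block_mastodon(application)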

Relying on the competence of unaffiliated developers is not a good way to run a business.

0x1C3B00DA ,
@0x1C3B00DA@fedia.io avatar

Relying on the competence of unaffiliated developers is not a good way to run a business.

This affects any site that's posted on the fediverse, including small personal sites. Some of these sites belong to people who didn't set them up themselves and don't know how, or aren't able, to block a user agent. Mastodon letting a bug like this languish, when it affects the small independent parts of the web that Mastodon is supposed to champion, is directly antithetical to its mission.

dsemy ,

The reason (IMO) this has languished as much as it has is that most sites handle this fine, though I agree that it should have been fixed by now.

dsemy , to Fediverse in Is Mastodon's Link-Previewing Overloading Servers ?

They also state their opinion that the issue “should have been prioritized for a faster fix… Don’t you think as a community-powered, open-source project, it should be possible to attend to a long-standing bug, as serious as this one?”

It's crazy how every single entity who has any issue with any free software project always seems to assume their needs should be prioritized.

delirious_owl ,
@delirious_owl@discuss.online avatar

Well, the users collectively should dictate the priorities.

dsemy ,

Why should they? The users of a free software project aren't entitled to anything.

If users want to dictate priorities they should become developers, and if they can't or won't, they should at least try to support the project financially.

delirious_owl ,
@delirious_owl@discuss.online avatar

Because democracy

jmcs , to Fediverse in Is Mastodon's Link-Previewing Overloading Servers ?

There's no reason why 114MB of static content over 5 minutes should be an issue for a public-facing website. Hell, I could probably serve that, plus the images, with a Raspberry Pi over my home Internet and still have bandwidth to spare.

I think they are throwing stones at the wrong glass house/software stack.

Sleepkever ,

It is not, but a traffic amplification of 36704:1 is one hell of an exploitable surface.

With that same Raspberry Pi and a single 1gbit connection, you could also make 333333 POST requests of 3 KB each in a single second, from fake accounts, each preferably with a fake follower on a lot of fediverse instances. That would result in those fediverse servers theoretically requesting 333333 * 114MB = ~38Gigabyte/s. At least for as long as you can keep posting new posts for a few minutes and the hosting servers still have bandwidth. DDoSing with a 'botnet' of fediverse servers/accounts made easy!

I'm actually surprised it hasn't been tried yet now that I think about it...

algernon ,
@algernon@lemmy.ml avatar

That would result in those fediverse servers theoretically requesting 333333 * 114MB = ~38Gigabyte/s.

On the other hand, if the site linked would not serve garbage, and would fit like 1Mb like a normal site, then this would be only ~325mb/s, and while that's still high, it's not the end of the world. If it's a site that actually puts effort into being optimized, and a request fits in ~300kb (still a lot, in my book, for what is essentially a preview, with only tiny parts of the actual content loaded), then we're looking at 95mb/s.

If said site puts effort into making their previews reasonable, and serve ~30kb, then that's 9mb/s. It's 3190 in the Year of Our Lady Discord. A potato can serve that.

MinekPo1 ,
@MinekPo1@lemmygrad.ml avatar
autistic complaining about units

ok so like I don't know if I've ever seen a more confusing use of units. at least you haven't used the p infix instead of the / in bandwidth units.

like you used both uppercase and lowercase in units, but I can't say if it was intentional or not? especially as sometimes the letter that should be uppercase is?

anyway

1Mb

is theoretically correct, but you likely meant either one megabyte (1 MB) or one mebibyte (1 MiB) rather than one megabit (1 Mb)

~325mb/s

95mb/s

and

9mb/s

I will presume you did not intend to write ~325 millibits per second, but ~325 megabits per second, and that you used the 333 333 request count from the segment you quoted.

though to be fair, op also made a mistake I think: taken at face value, the number they gave works out to about 304 terabits per second (304 Tb/s), i.e. 38 terabytes per second (38 TB/s), not ~38 gigabytes per second. and the request count itself is off: they calculated the number of requests you can make over a 1 gigabit (which is what I assume they meant by gbit) connection wrong, forgetting to account for a byte being 8 bits. you can only make about 41 667 requests of 3 kB a second (sorry, I'm not checking what would happen if they meant kibibytes; I underestimated how demanding this would be, but I'm too deep in it now, so I'm gonna take that cop-out), giving about 38 terabits per second (38 Tb/s), or 4.75 terabytes per second (4.75 TB/s), assuming the entire packet is exactly 114 megabytes (114 MB), which is about 108.7 mebibytes (108.7 MiB). so anyway

packet size    theoretical bandwidth
1 Mb           41.7 Gb/s      5.2 GB/s
1 MB           333.3 Gb/s     41.7 GB/s
1 MiB          349.5 Gb/s     43.7 GB/s
300 kb         12.5 Gb/s      1.6 GB/s
300 kB         100.0 Gb/s     12.5 GB/s
300 kiB        102.4 Gb/s     12.8 GB/s
30 kb          1.3 Gb/s       0.2 GB/s
30 kB          10.0 Gb/s      1.3 GB/s
30 kiB         10.2 Gb/s      1.3 GB/s

hope that table is ok and all cause im in a rush yeah bye
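For anyone who wants to sanity-check the arithmetic, a quick sketch assuming a 1 Gbit/s uplink and 3 kB requests, as in the comments above:

LINK_BITS = 1e9               # attacker uplink: 1 Gbit/s
REQ_BITS = 3_000 * 8          # one 3 kB POST request, in bits

reqs_per_sec = LINK_BITS / REQ_BITS   # ~41,667 requests/s, not 333,333

for label, size_bytes in [("1 Mb", 125_000), ("1 MB", 1_000_000),
                          ("1 MiB", 1_048_576), ("300 kB", 300_000),
                          ("30 kB", 30_000), ("114 MB", 114_000_000)]:
    bits_per_sec = reqs_per_sec * size_bytes * 8
    print(f"{label:>7}: {bits_per_sec / 1e9:8.1f} Gb/s"
          f"  {bits_per_sec / 8e9:8.1f} GB/s")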

algernon , to Fediverse in Is Mastodon's Link-Previewing Overloading Servers ?
@algernon@lemmy.ml avatar

...and here I am, running a blog that won't even bat an eye if it gets 15k hits a second, and I could run it on a potato. Probably because I don't serve hundreds of megabytes of garbage to visitors. (The preview image is also controllable iirc, so just, like, set it to something reasonably sized.)

moreeni ,

Wait, you're going to tell me you don't actually have to serve bloat on a blog like it's foss? No way!

algernon ,
@algernon@lemmy.ml avatar

I only serve bloat to AI crawlers.

map $http_user_agent $badagent {
  default     0;
  # list of AI crawler user agents in "~crawler 1" format
}

# anything flagged as an AI crawler gets rewritten to the decoy location
if ($badagent) {
   rewrite ^ /gpt;
}

# the decoy proxies to the full Bee Movie script instead of real content
location /gpt {
  proxy_pass https://courses.cs.washington.edu/courses/cse163/20wi/files/lectures/L04/bee-movie.txt;
}

...is a wonderful thing to put in my nginx config. (you can try curl -Is -H "User-Agent: GPTBot" https://chronicles.mad-scientist.club/robots.txt | grep content-length: to see it in action ;))

delirious_owl ,
@delirious_owl@discuss.online avatar

Your bandwidth bill lol

algernon ,
@algernon@lemmy.ml avatar

I don't think serving 86 kilobytes to AI crawlers will make any difference in my bandwidth use :)

delirious_owl ,
@delirious_owl@discuss.online avatar

Oic its a redirect now

algernon ,
@algernon@lemmy.ml avatar

It's not. It just doesn't get enough hits for that 86k to matter. Fun fact: most AI crawlers hit /robots.txt first; they get served the bee movie script, fail to interpret it, and leave without crawling further. If I let them crawl the entire site, that'd result in about two megabytes of traffic. By serving an 86kb file that doesn't pass as robots.txt and has no links, I actually save bandwidth. Not on a single request, but by preventing a hundred others.

skullgiver , (edited )
@skullgiver@popplesburger.hilciferous.nl avatar

[Thread, post or comment was deleted by the author]

Moonrise2473 ,

Or serve a gzip bomb (is that possible?)
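It is possible in principle: gzip compresses long runs of identical bytes at roughly 1000:1, so a file of about 1 MB on disk can inflate to a gigabyte on the client. A minimal sketch in Python (file name and sizes are arbitrary):

import gzip

CHUNK = b"\0" * (1 << 20)   # 1 MiB of zeros per write

# Write ~1 GiB of zeros through gzip without holding it all in memory;
# the result on disk is only about 1 MB, but inflates to 1 GiB.
with gzip.open("bomb.gz", "wb", compresslevel=9) as f:
    for _ in range(1024):
        f.write(CHUNK)

# Serve bomb.gz with "Content-Encoding: gzip" so clients auto-inflate it.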

xilliah , to Privacy in Mozilla Stands Against Google's New Advertising Tech
@xilliah@beehaw.org avatar

The time is nigh that I'll get to see a relevant ad. Finally, a system complicated enough to be able to store that I'm into PC games.

TheWoozy , to Privacy in Mozilla Stands Against Google's New Advertising Tech

Google is an advertising company. Their goal is to maximize profit from advertising. Avoiding government regulation is part of that goal. By imposing "good enough" self-regulation they hope to keep governments from stepping in. Their solution is definitely better than the currently dying third-party-cookie free-for-all.

Mozilla is right to question whether "targeted" ads are a good idea at all. I personally find it easier to ignore non-targeted ads. But if Mozilla decides not to cooperate & holds out for the Platonic ideal tech, they may cause ad-dependent websites to block Firefox completely. That would not be good for any of us.

TCB13 , to Privacy in Mozilla Stands Against Google's New Advertising Tech
@TCB13@lemmy.world avatar

Questionable-ethics corporation #1 stands against questionable-ethics mega-corporation #1.

Outtatime ,
@Outtatime@sh.itjust.works avatar

Bingo

RGB3x3 ,

How is Mozilla questionable?

TCB13 ,
@TCB13@lemmy.world avatar
FriendBesto ,

Do not know why you are being downvoted; you are correct.

I am thankful Mozilla exists because it provides some choice, but if you have to change user.js (a non-trivial action for regular end users) or use a fork like LibreWolf, Mullvad Browser, or even Tor to maximise privacy that should mostly be available as an easy opt-in setting out of the box, that tells me Mozilla is not the angel fanboys would like it to be.

Also, their telemetry collection is not trivial either, even more so in their Nightly builds, which in fairness is sort of expected. And do not forget that FF has pushed XPIs to end users without their consent in the past.

TCB13 , (edited )
@TCB13@lemmy.world avatar

Well, it might be the fanboyism hitting hard. I also like the fact that Mozilla / Firefox exists, but it isn't the silver bullet everyone paints it as.

People say very good things about Firefox, but they like to hide and avoid the shady stuff. Firefox is better than most, no doubt there, but at the same time they do have some shady finances and they also do stuff like adding unique IDs to each installation. I just saw someone commenting "oh but download from the FTP and you won't be tracked"... seriously? Isn't adding an ID to the installer that 95% of people are going to use, without opt-out or any warning, crossing a line? There's no justification for this.

Firefox also does a LOT of calling home. Just fire up Wireshark alongside it and see how much calling home, and even calling third parties, it does. From basic OCSP requests to calling Firefox servers and a third-party analytics company, they do it all, and disabling most stuff in Settings doesn't fix it.

I know other browsers do it as well, except for Ungoogled Chromium and LibreWolf, and because of that I'm sticking with them. I would like to avoid programs that need to snitch whenever I open them.
