
moonpiedumplings

@moonpiedumplings@programming.dev


moonpiedumplings ,

However, freshtomato is another router firmware. It isn't as feature-rich or well supported as OpenWrt, but it's focused on supporting Broadcom chipsets.

https://www.freshtomato.org/

https://wiki.freshtomato.org/doku.php/hardware_compatibility

I flashed it to my Netgear router with a Broadcom chipset, and it works wonderfully!

moonpiedumplings ,

After Twitter went to shit, where else do customers have to go for customer support like this?

Admittedly, I didn't read the article, but I have seen plenty of other cases with Cloudflare or other big providers where people have only been able to set things right by kicking up a fuss on social media, like that recent one with Amazon AWS.

moonpiedumplings ,

Putting something on GitHub is really inconsequential if you’re making your project open source since anyone can use it for anything anyway,

Except for people in China (GitHub is blocked there) or people on IPv6-only networks, since GitHub hasn't bothered to support IPv6, cutting out those in countries where IPv4 addresses are scarce.

So yes, it does matter. Gitlab and Codeberg, the two big alternatives, both support IPv6 (idk about them being blocked in China). They also support GitHub logins, so you don't even need to make an account.

And it's not black and white. Software freedom is a spectrum, not a binary. We should strive to use more open source, decentralized software, while recognizing that many parts are going to be out of our immediate control, like the backbone of the internet or little pieces like proprietary firmware.

Nextcloud appreciation post

After months of waiting, I finally got myself an instance with Libre Cloud. I was expecting basic file storage with a few goodies but boy, this is soooo much more. I am amazed by how complete this is!!! Apps let me configure my instance to fit everything I need, my workflow is now crazy fast and I can finally say goodbye to...

moonpiedumplings ,

What was it? I'm planning to do a nextcloud deployment via helm soon.

moonpiedumplings ,

sn1per is not open source, according to the OSI's definition

The license for sn1per can be found here: https://github.com/1N3/Sn1per/blob/master/LICENSE.md

It's more of a EULA than an actual license. It prohibits a lot of things, and is basically source-available.

You agree not to create any product or service from any part of the Code from this Project, paid or free

There is also:

Sn1perSecurity LLC reserves the right to change the licensing terms at any time, without advance notice. Sn1perSecurity LLC reserves the right to terminate your license at any time.

So yeah. I decided to test it out anyways... but what I see... is not promising.

FROM docker.io/blackarchlinux/blackarch:latest

# Upgrade system
RUN pacman -Syu --noconfirm

# Install sn1per from official repository
RUN pacman -Sy sn1per --noconfirm

CMD ["sn1per"]

The two pacman commands are redundant. You only need to run pacman -Syu sn1per --noconfirm once. This also goes against docker best practice, as it creates two layers where only one is necessary. In addition, best practice also includes deleting cache files, which isn't done here. The final docker image is probably significantly larger than it needs to be. A cleaned-up version is sketched below.
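For illustration, a minimal sketch of what following those practices looks like. This is my rough rewrite, not anything from their repo:

FROM docker.io/blackarchlinux/blackarch:latest

# Upgrade the system and install sn1per in a single layer,
# then drop the package cache so it doesn't bloat the image
RUN pacman -Syu sn1per --noconfirm \
    && rm -rf /var/cache/pacman/pkg/*

CMD ["sn1per"]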

Their kali image has similar issues:

RUN set -x \
        && apt -yqq update \
        && apt -yqq full-upgrade \
        && apt clean
RUN apt install --yes metasploit-framework

https://www.docker.com/blog/intro-guide-to-dockerfile-best-practices/

It's still building right now. I might edit this post with more info if it's worth it. I really just want a command-line vulnerability scanner, and sn1per seems to offer that with greenbone/openvas as a backend.

I could modify the dockerfiles with something better, but I don't know if I'm legally allowed to do so outside of their repo, and I don't feel comfortable contributing to a repo that's not FOSS.

moonpiedumplings ,

This is just straight up wrong. iMessage on Android has worked by connecting to a remote Mac, which then connects to iMessage. The protocol is locked to Apple's hardware.

And even if there were a true open source reimplementation of iMessage, that would say nothing about the security of Apple's proprietary implementation of iMessage's end-to-end encryption.

moonpiedumplings ,

Because some of us have fat fingers and accidentally downvote when we scroll on mobile.

One of the things I liked about reddit was that, since it saved downvoted posts, I could go through the list every once in a while and undownvote the accidents.

Can't do that here though, and I sometimes notice posts or comments I've accidentally downvoted.

Anyway, people shouldn't care so much. We don't have a karma system or the like here, so why does it matter?

moonpiedumplings ,

I'm using eternity, which hasn't received any updates, on my phone, and the default lemmy web interface on my computer.

Maybe I need to try some other options.

How can I bypass CGNAT by using a VPS with a public IPv4 address?

I want to move away from Cloudflare tunnels, so I rented a cheap VPS from Hetzner and tried to follow this guide. Unfortunately, the WireGuard setup didn't work. I'm trying to forward all traffic from the VPS to my homeserver and vice versa. Are there any other ways to solve this issue?...

moonpiedumplings , (edited )

I use this too, and it should be noted that this does not require wireguard or any VPN solution. Rathole can be served publicly, allowing a machine behind a NAT or firewall to connect.
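For anyone curious, rathole is configured with two small TOML files, roughly like this (the hostname, ports, and token are placeholders):

# server.toml, on the VPS
[server]
bind_addr = "0.0.0.0:2333"

[server.services.web]
token = "some_long_secret"
bind_addr = "0.0.0.0:8080"   # the publicly exposed port

# client.toml, on the home server
[client]
remote_addr = "vps.example.com:2333"

[client.services.web]
token = "some_long_secret"
local_addr = "127.0.0.1:8080"   # the local service being tunneled out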

moonpiedumplings ,

Upstart was better? Even Ubuntu, made by the creators of upstart (Canonical), decided to switch to systemd after using upstart for a while.

moonpiedumplings ,

What made it better?

moonpiedumplings ,

No, it is lock-in. If Apple allowed app stores other than their own, then users could pay for an app on one app store and not have to pay again on another, potentially even on non-Apple devices.

I encountered this when I first purchased Minecraft Bedrock Edition on the Amazon Kindle. Rather than repurchasing it on the Google Play store when on a non-Amazon device, I simply tracked down the Amazon app store for non-Amazon devices and redownloaded it from there. No lock-in to Amazon devices or to other Android devices, in either direction.

Now, the Apple app store would still probably not work on Android... but Apple would actually have to compete for users on the app store, potentially by offering something better, like purchases that transfer across ecosystems.

I suspect the upcoming Epic store for iOS and android may be like that... pay for a game/app on one OS, get it available for all platforms where you have the Epic store. But the only reason the Epic store is even coming to iOS is because Apple has been forced to open up their ecosystem.

moonpiedumplings ,

It's a shame the price you pay for that is no crossplatform support.

If you have a little bit of server management know-how, you can set up https://geysermc.org/, which allows crossplay between Bedrock and Java players on a Java server.
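A rough sketch of the standalone setup (this assumes you've downloaded Geyser-Standalone.jar from geysermc.org onto the same host as your Java server):

java -Xms1024M -jar Geyser-Standalone.jar
# The first run generates a config.yml; point its "remote" address at your
# Java server, then Bedrock clients connect over UDP port 19132 (the default)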

Linux distro for selfhosting server

So I have been running a fair amount of selfhosted services over the last decade or so. I have always been running this on an Ubuntu LTS distribution on an Intel NUC machine. Most, if not all, of my services run in a docker container, using a docker compose file that brings everything up. The server is headless. I...

moonpiedumplings ,

LXD/Incus. It's truly free/open

Please stop saying this about lxd. You know it isn't true, ever since they started requiring a CLA.

Looking at those terms, LXD is literally less free than Proxmox, since Canonical isn't required to open source any custom LXD versions they host.

Also, I've literally brought this up to you before, and you acknowledged it. But you continue to spread this despite the fact that you should know better.

Anyway, Incus currently isn't packaged in debian bookworm, only trixie.

The version of LXD that Debian packages predates the license change, so that's still free. But for people on other distros, it's better to clarify that Incus is the truly FOSS option.

moonpiedumplings ,

Edge WebView2

I'm like 90% sure this requires Edge to be installed, even though the EU mandated that they make Edge uninstallable. So that might be their game here.

PSA: Docker nukes your firewall rules and replaces them with its own.

I use nftables to set my firewall rules. I typically manually configure the rules myself. Recently, I just happened to dump the ruleset, and, much to my surprise, my config was gone, and it was replaced with an enormous amount of extremely cryptic firewall rules. After a quick examination of the rules, I found that it was...

moonpiedumplings ,

Yes it is a security risk, but if you don’t have all ports forwarded, someone would still have to breach your internal network IIRC, so you would have many many more problems than docker.

I think from the dev's point of view (not that it is right or wrong), this is intended behavior simply because if docker didn't do this, they would get 1,000 issues opened per day of people saying containers don't work when they forgot to add a firewall rule for a new container.

My problem with this is that, when running a public-facing server, it ends up with people exposing containers that really, really shouldn't be exposed.

Excerpt from another comment of mine:

It’s only docker where you have to deal with something like this:

---
services:
  webtop:
    image: lscr.io/linuxserver/webtop:latest
    container_name: webtop
    security_opt:
      - seccomp:unconfined #optional
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - SUBFOLDER=/ #optional
      - TITLE=Webtop #optional
    volumes:
      - /path/to/data:/config
      - /var/run/docker.sock:/var/run/docker.sock #optional
    ports:
      - 3000:3000
      - 3001:3001
    restart: unless-stopped

Originally from here, edited for brevity.

Resulting in exposed services. Feel free to look at shodan or zoomeye, internet-connected device search engines, for exposed versions of this service. This service is highly dangerous to expose, as it gives people a way into your system via the docker socket.

moonpiedumplings ,

Probably not an issue, but you should check. If the opened port looks like 127.0.0.1:portnumber, then it's only bound to localhost, and only the local machine can access it. If no address is specified, it binds to 0.0.0.0, and anyone who can reach the server can access that service.

An easy way to see running containers is docker ps, which also shows their forwarded ports.

Alternatively, you can use the nmap tool to scan your own server for exposed ports. nmap -A serverip does the slowest, but most in-depth, scan.
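For compose users, the difference looks like this (the port numbers are just examples):

ports:
  - "127.0.0.1:3000:3000"   # only reachable from the server itself
  - "3001:3001"             # binds 0.0.0.0, reachable by anything that can reach the server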

moonpiedumplings ,

Docker's manipulation of nftables is pretty well defined in their documentation

Documentation people don't read. People expect that, like most other services, docker binds to ports and addresses behind the firewall. Literally no other container runtime/engine does this, including, notably, podman.

As to the usage of the docker socket, that is widely advised against unless you really know what you're doing.

Too bad people don't read that advice. They just deploy the webtop docker compose without understanding any of it. I like (hate?) linuxserver's webtop, because it's an example of two of the worst docker footguns in one.

To include the rest of my comment that I linked to:

Do any of those poor saps on zoomeye expect that I can pwn them by literally opening a webpage?

No. They expect their firewall to protect them by not allowing remote traffic to those ports. You can argue semantics all you want, but not informing people of this gives them another footgun to shoot themselves with. Hence, docker “bypasses” the firewall.

On the other hand, podman respects your firewall rules. Yes, you have to edit the rules yourself. But that’s better than a footgun. The literal point of a firewall is to ensure that any services you accidentally have running aren’t exposed to the internet, and docker throws that out the window.

You originally stated:

I think from the dev's point of view (not that it is right or wrong), this is intended behavior simply because if docker didn't do this, they would get 1,000 issues opened per day of people saying containers don't work when they forgot to add a firewall rule for a new container.

And I'm trying to say that even if that was true, it would still be better than a footgun where people expose stuff that's not supposed to be exposed.

But that isn't the case for podman. A quick look through the podman github issues doesn't show it inundated with newbies asking how to expose services because they assumed a firewall port needed to be opened. Instead, there are bug reports in the opposite direction, like this one, where services were being exposed despite the firewall being up.

(I don't have anything against you, I just really hate the way docker does things.)

Cloudflare Alternative

What do you guys use to expose private IP addresses to the web? I was using the npm proxy manager with Cloudflare CDN. However, it stopped working after I changed my router (I keep getting error 521). Looking for an alternative to Cloudflare cdn so I can access my media server/self-hosted services away from LAN....

moonpiedumplings ,

If you need public access:

https://github.com/anderspitman/awesome-tunneling

From this list, I use rathole. One rathole container runs on my vps, and another runs on my home server, exposing my reverse proxy (caddy) to the public.

moonpiedumplings ,

I recently noticed that it's now integrated into Canvas, the FOSS online learning management software that my college (and my high school, and my middle school) have used.

Too bad no one bothers with it, forcing everyone to use zoom instead. Which sucks, because on the first day of online classes, zoom permissions weren't set up properly, meaning no one could join the meeting. That probably wouldn't have happened with BigBlueButton.

moonpiedumplings ,

AWS is software. Just not something you can self host.

There already exist alternatives to AWS, like localstack, a local AWS for testing purposes, or the more mature openstack, which is designed for essentially running your own AWS at scale.
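Trying localstack out is roughly this (the image and port are from their docs; the bucket name and dummy credentials are arbitrary):

docker run -d -p 4566:4566 localstack/localstack

AWS_ACCESS_KEY_ID=test AWS_SECRET_ACCESS_KEY=test AWS_DEFAULT_REGION=us-east-1 \
  aws --endpoint-url=http://localhost:4566 s3 mb s3://test-bucket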

moonpiedumplings ,

  • Provision management software: Openstack Skyline/Horizon
  • Compute: Openstack Nova

And so on. Openstack is also many, many components, that can be pieced together for your own cloud computing platform.

Although it won't have the sheer number of services AWS has, many of them are redundant.

The core services I expect to see done first: compute, networking, storage (+ image storage), and a web UI/API

Next: S3 storage, Kubernetes as a service, and then either Databases as a service or containers as a service.

But you are right, many of the services that AWS offers are highly specialized (robotics, space communication), and people get locked in, and I don't really expect to see those.

moonpiedumplings ,

Nothing more questionable than LXD, which now requires a contributor license agreement allowing Canonical to not open source their hosted versions, despite LXD being AGPL.

Thankfully, it's been forked as incus, and debian is encouraging users to migrate.

But yeah. They haven't said what makes proxmox's license questionable.

moonpiedumplings ,

Someone recommended ssh, which is good, but it can't tunnel UDP connections.
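For TCP, a plain ssh reverse tunnel does work, something like this (it needs "GatewayPorts yes" in the VPS's sshd_config to listen on all interfaces); there's just no UDP equivalent:

ssh -N -R 0.0.0.0:8080:localhost:80 user@vps.example.com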

https://github.com/anderspitman/awesome-tunneling

From this list, I selected rathole since they claimed to be more performant than frp, the most popular solution.

moonpiedumplings ,

A tip I have is to move away from manjaro.

When you use a rolling release, you lose one of the main features of stable release distros: automatic, unattended upgrades. AFAIK, every stable release distro has those, and none of the rolling releases do (except maybe openSUSE's new Slowroll and CentOS rolling, but I wouldn't recommend or use those).

Manjaro has other issues too, but that's the big one.

Although I use arch on my laptop, I run debian on my server because I don't want to have to baby it, especially since I primarily access it remotely. Automatic upgrades remove one more complication, letting me focus on the server itself.

As for application deployment itself, I recommend application containers, via either docker or podman. There are many premade containers for those platforms, for apps like jellyfin, or the various music streaming apps people use to replace spotify (I can't remember any off the top of my head, but I know you have lots of options).

However, there are two caveats to docker (not podman) people should know:

  • Docker containers don't auto-update, although you can use something like watchtower to update them automatically. Podman has an auto-update command you can run on a schedule (see the sketch after this list).
  • Docker bypasses your firewall. If you forward port 80, docker will go around the firewall and publish it. The reason is that most linux firewalls use iptables or nftables under the hood, and docker edits those directly... this has security implications: I've seen many container services on the public internet that people didn't intend to put there.
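A rough sketch of podman's auto-update flow (the container name and image are just examples; podman applies these updates through systemd units):

# Label the container as auto-updatable, run it under a systemd unit
podman create --name jellyfin --label io.containers.autoupdate=registry \
  docker.io/jellyfin/jellyfin:latest
podman generate systemd --new --name jellyfin > ~/.config/systemd/user/jellyfin.service
podman rm jellyfin   # systemd recreates it, since the unit was made with --new
systemctl --user daemon-reload
systemctl --user enable --now jellyfin.service
# Enable the timer podman ships so updates run on a schedule
systemctl --user enable --now podman-auto-update.timer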

Podman, however, respects your firewall rules. Podman isn't perfect though; some apps won't run in podman containers, although my use case is a little more niche (the Greenbone vulnerability scanner).

As for where to start, projects like linuxserver provide podman/docker containers you can use to deploy many apps fairly easily, once you learn how to launch apps with a compose file. Check out the dockerized Nextcloud they provide. Nextcloud is a google drive alternative, although sometimes people complain about it being slow. I don't know about the quality of linuxserver's Nextcloud, so you'd have to do some research and find a good docker container.

moonpiedumplings , (edited )

Don’t do unattended upgrades. Neither host nor containers. Do blind or automated updates if you want but check up on them and be ready to roll back if something is wrong.

Those issues are only common on rolling releases. On stable distros, they put tape between breaking changes, test that tape, and then roll out updates.

Debian and many other distros support it officially: https://wiki.debian.org/UnattendedUpgrades. It's not just a cronjob running "apt install", but an actual process, including automated checks. You can configure it to not upgrade specific packages, or to stick to security updates.
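On Debian that's roughly:

sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades
# Policy lives in /etc/apt/apt.conf.d/50unattended-upgrades, where you can
# restrict it to security updates or blacklist specific packages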

As for containers, it is trivial to roll back versions, which is why unattended upgrades are OK. Although, if data or configuration is corrupted by a bug, you would probably have to restore from backup (something I should have suggested in my initial reply).

It should be noted that unattended upgrades don't always mean "upgrade to the latest version". For docker/podman containers, you can pin them to a stable release, and then unattended upgrades happen within that release, preventing major breaking changes.
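For example, in a compose file (the image is just an illustration; the available tags vary per project):

services:
  db:
    image: docker.io/library/postgres:16   # follows 16.x patch releases, never jumps to 17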

Similarly, on many distros, you can configure them to only do the minimum security updates, while leaving other packages untouched.

People should use what distro they know best. A rolling distro they know how to handle is much better than a non-rolling one they don’t.

I don't really feel like reinstalling the bootloader over ssh to a machine that doesn't have a monitor, but you do you. There are real, significant differences between stable and rolling release distros that make a stable release more suited for a server, especially one you don't want to baby remotely.

I use arch. But the only reason I can afford to baby a rolling release distro is because I have two laptops (both running arch). I can feel confident that if one breaks, I can use the other. All my data is replicated to each laptop, and backed up to a remote server running syncthing, so I can even reinstall and not lose anything. But I still panicked when I saw that message suggesting that I should reinstall grub.

That remote server? Ubuntu with unattended upgrades, by the way. Most VPS providers will give you a linux distro image with unattended security upgrades enabled, because it removes a footgun from the customer. On Contabo with Rocky 9, it even seems to do automatic reboots. This ensures that their customers don't have insecure, outdated binaries or libraries.

Docker doesn’t “bypass” the firewall. It manages rules so the ports that you pass to host will work. Because there’s no point in mapping blocked ports. You want to add and remove firewall rules by hand every time a container starts or stops, and look up container interfaces yourself? Be my guest.

Docker is a way for me to run services on my server. Literally every other service application respects the firewall. Sometimes I want services exposed on my home network but not on public wifi; docker can't do that, but the firewall can. Sometimes I want to configure a service while keeping it running, or test it locally, or just use it locally.

It's only docker where you have to deal with something like this:

---
services:
  webtop:
    image: lscr.io/linuxserver/webtop:latest
    container_name: webtop
    security_opt:
      - seccomp:unconfined #optional
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - SUBFOLDER=/ #optional
      - TITLE=Webtop #optional
    volumes:
      - /path/to/data:/config
      - /var/run/docker.sock:/var/run/docker.sock #optional
    ports:
      - 3000:3000
      - 3001:3001
    restart: unless-stopped

Originally from here, edited for brevity.

Resulting in exposed services. Feel free to look at shodan or zoomeye, internet-connected device search engines, for exposed versions of this service. This service is highly dangerous to expose, as it gives people a way into your system via the docker socket.

Do any of those poor saps on zoomeye expect that I can pwn them by literally opening a webpage?

No. They expect their firewall to protect them by not allowing remote traffic to those ports. You can argue semantics all you want, but not informing people of this gives them another footgun to shoot themselves with. Hence, docker "bypasses" the firewall.

On the other hand, podman respects your firewall rules. Yes, you have to edit the rules yourself. But that's better than a footgun. The literal point of a firewall is to ensure that any services you accidentally have running aren't exposed to the internet, and docker throws that out the window.
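For example, with nftables that's a one-liner, assuming an existing "inet filter" table with an "input" chain (the table and chain names vary by distro):

nft add rule inet filter input tcp dport 8080 accept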

moonpiedumplings ,

Mozilla: ignores years of customer complaints and requests

Are these customers donating, or purchasing mozilla products or services, so that mozilla doesn't have to rely on google's money?

Mozilla: creates new product nobody asked for

https://github.com/Mozilla-Ocho

Nearly 10k and 400 stars on those respective repos.

A way to run a large language model on any operating system, in a simple, local, and privacy-respecting manner?

For linux we have docker, but Windows users were starving for a good way to do this, and even on linux, removing the step of configuring docker (or another container runtime) to work with nvidia is nice.

And it's still FOSS stuff they aren't being paid for, currently. But there are plenty of ways to monetize this.

Here's an easy one: tie in the vpn service they have, to allow you to access the web ui of the computer running the llamafile remotely. Configure something like end-to-end encryption or nat traversal (so not even mozilla can sniff the traffic), and you end up with a private LLM you can access remotely.

With this, maybe they can afford some actual development on firefox, without having to rely on google money.

moonpiedumplings ,

Because much of mozilla's funding is from a deal with google, that's why.

US$300 million annually. Approximately 90% of Mozilla's royalties revenue for 2014 was derived from this contract

From https://en.wikipedia.org/wiki/Mozilla_Foundation

A lot of money, but not enough to actually do a lot. They keep cutting features their "customers" like. Why?

Because development is expensive.

Google props mozilla up to pretend they don't have a monopoly on the internet. Just enough money to barely keep up, not enough to truly stay competitive.

Mozilla wants to not rely on google money, so they are trying to expand their products. AI is overhyped, but still useful, and something worth investing in.

moonpiedumplings , (edited )

It appeals to me for management of a windows machine for a few things:

  • Lots of machines at once, over winrm (although ssh is the default, as ansible is linux-first).
  • I don't have to learn powershell - a shared language means the windows team and the linux team don't have to learn each other's language. In ansible, it's very easy to avoid the footguns that come with something like bash, especially after you install the red hat linter, ansible-lint, which warns of ansible's own footguns.
  • easy to version control it
  • premade stuff: the official "modules" are massive and do a lot. There are also community packages: https://galaxy.ansible.com - of course, you should probably check any stuff you run first. But ansible is very easy to read.
  • built in secret management. Encrypt secrets, but still be able to use them smoothly with the automation framework.

For just one machine? Task scheduler is probably good enough. 2-3 machines, managed remotely? Ansible is at least worth looking at.

Edit: also, really good docs. Like, check out this active directory module with examples: https://docs.ansible.com/ansible/latest/collections/microsoft/ad/object_info_module.html#ansible-collections-microsoft-ad-object-info-module

The examples are very helpful, with things like getting a list of AD users. I used that to create an Ansible script to shuffle all AD user passwords - while being a linux lover who hates windows and had literally never touched AD before this. A rough sketch of the idea follows the links below.

https://github.com/CSUN-CCDC/CCDC-2023/blob/main/windows/ansible/testing/users.yml

https://github.com/CSUN-CCDC/CCDC-2023/blob/main/windows/ansible/roles/domain/tasks/main.yml
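The sketch. This is not the actual script from those repos; the modules are from the microsoft.ad collection linked above, and the password lookup here just generates a random string without saving it anywhere:

- name: Gather all AD user objects
  microsoft.ad.object_info:
    filter: ObjectClass -eq 'user'
  register: ad_users

- name: Set a random password on each user
  microsoft.ad.user:
    identity: "{{ item.DistinguishedName }}"
    password: "{{ lookup('ansible.builtin.password', '/dev/null', length=24) }}"
    update_password: always
  loop: "{{ ad_users.objects }}"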

How was the Snowflake proxy used in 2023? (forum.torproject.org)

We can also break down users by country. The largest contingent of Snowflake users are in Iran, which has been the case since the Mahsa Amini protests in 2022. The graph shows also a large number of users apparently from the United States, but we believe that may be partly the result of geolocation errors, and many of them are...

moonpiedumplings , (edited )

They could. But in countries where internet access is restricted by the authorities, running more than an insignificant amount of traffic over a VPN, even over stealthy protocols that make it indistinguishable from website (http/s) traffic, can be noticeable... and being noticed can get you killed.

Snowflake, on the other hand, routes you through users of the snowflake browser extension, who act as entry points. It's named that because connections are ephemeral, lasting only a short time, like snowflakes. This makes it much harder to detect.

It's not only about what the internet traffic is, it's also about where it's going.

And of course, the how is relevant too. Not many people want to spend the time to set up an ssl vpn (and multiple people using it makes it easier to spot).

You need to understand what you're asking when you suggest people set up their own proxy. You're asking them to learn a skill, most likely in their free time (free time and energy they may not even have), and without many resources to learn (censored internet), and then rest their lives and livelihoods on that skill. Depending on the regime, maybe the lives of their friends and family, as well.

Comparatively, it's like two clicks to select snowflake as an entrypoint in the tor browser configuration options.

moonpiedumplings ,

The tldr as I understand it: Mac M1/M2 devices are unique in that the vram (gpu ram) is the same as the normal ram. This unified memory allows LLMs to run on the gpu of those chips, using regular ram as "vram", letting you run bigger models on smaller devices.

Llama.cpp was the software users originally did this with. I can't find the original guide/article I looked at, but here is a github gist where the commenters have done benchmarks:

https://gist.github.com/cedrickchee/e8d4cb0c4b1df6cc47ce8b18457ebde0
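For reference, running a model with llama.cpp on Apple silicon looks roughly like this (the model filename is a placeholder; -ngl offloads layers to the GPU, via Metal on these chips):

./main -m ./models/some-13b-model.Q4_K_M.gguf -p "Hello" -ngl 99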

moonpiedumplings ,

Probably best to find a standard dev job and make a game on your own time as a passion project.

I watch twitch streamers who make games, and this seems to be the way to go. I can't really judge through a screen, but they seem happy and excited to work on their stuff.

Oh, also non-compete clauses are going to mean if you work for AAA, you immediately can't make your own stuff anymore either

Depending on your jurisdiction, these can have various degrees of enforceability. A quick look at the wikipedia page for them tells me they are mostly void in California. Although I suppose no one wants to get into a legal battle they can avoid.

moonpiedumplings , (edited )

If I run two mysql containers, it won't necessarily take twice the resources of a single mysql container

It's complicated, but essentially, no.

Docker images are built in layers, where each layer is a step in the build process. Identical layers are shared between containers, to the point of taking up the RAM of running the layer only once.

Although, it should be noted that docker doesn't load the whole container into memory; like on a normal linux OS, unused stuff just sits on your disk. Rather, binaries or libraries loaded by two docker containers from the same layer only use up the RAM of one instance. This is similar to how shared libraries reduce RAM usage.

Docker only does this deduplication if you are using overlayfs or aufs, but I think overlayfs is the default.

https://moonpiedumplings.github.io/projects/setting-up-kasm/#turns-out-memory-deduplication-is-on-by-default-for-docker-containers
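You can see the disk side of this yourself; two containers from the same image don't store the image twice:

docker run -d --name db1 mysql:8
docker run -d --name db2 mysql:8
docker system df -v   # the image's layers are counted once, shared by both containers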

Should you run more than one database container? Well, I dunno how mysql scales. If there's a performance benefit to having only one mysqld instance, then it's probably worth it. Like, if mysql uses up that much RAM regardless of which databases you have loaded, in a way that can't be deduplicated, then you'd definitely see a benefit from a single container.

What if your services need different database versions, or even software? Then different database containers is probably better.

moonpiedumplings ,

your typical manga/light novel weebo

No Chinese support :(

I read a ton of web novels translated from Chinese, and reading the untranslated versions would be a fun way to learn Chinese. Or Korean.

I don't really like the Japanese light novels as much.

Edit: hmmm, it seems like there are similar projects, and some have custom language support. I may need to look into those in the future.

moonpiedumplings ,

In my experience, it's best for science, math, and technology stuff:

https://arxiv.org/

I've found it to be very good for finding scientific articles.
