• 0 Posts
  • 9 Comments
Joined 2 years ago
Cake day: June 25th, 2023




  • You mean for the referer part? Of course you don’t want it for all URLs, and there are some legitimate cases. I have it on specific URLs where a missing referer is highly unlikely, not every URL, e.g. a direct link to a single comment in Lemmy, combined with whitelisting logged-in users. Plus a threshold, like >3 times an hour before a ban. It’s already pretty unusual to bookmark a link to a single comment.

    It’s a pretty consistent bot pattern: they’ll go to some sub-subpage with no referer and no prior traffic from that IP, then no other traffic from that IP for a while (since they cycle through IPs on each request), but you’ll get a ton of these requests across all the IPs they use. It was one of the most common patterns I saw when I followed the logs for a while.

    Of course a honeypot URL in a hidden link or something gives more reliable results, if you can add such a link, but if you’re hosting some software that you can’t easily add that to, suspicious patterns like the one above can work really well in my experience. Just don’t enforce it right away: run it with the ‘dummy’ action in f2b for a while and double-check.

    And I mostly intended that as an example of seeing suspicious traffic in the logs and tailoring a rule to it, something like the sketch below. It doesn’t take very long and can be very effective.
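
    A rough sketch of what that looks like in fail2ban (the comment-URL pattern follows the Lemmy example above; paths and thresholds are illustrative, so test with a non-banning action first):

      # /etc/fail2ban/filter.d/nginx-noreferer.conf
      # Matches a direct 200 hit on a single-comment URL with an empty
      # referer ("-") in nginx's default combined log format.
      [Definition]
      failregex = ^<HOST> .* "GET /comment/\d+ [^"]*" 200 \d+ "-"

      # jail.local: >3 hits in an hour before a ban, as described above
      [nginx-noreferer]
      enabled  = true
      filter   = nginx-noreferer
      logpath  = /var/log/nginx/access.log
      findtime = 3600
      maxretry = 3

    The logged-in-user whitelist isn’t shown; that takes a bit of extra scripting (e.g. collecting IPs that got a 200 on the login endpoint and feeding them to the jail’s ignoreip).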


  • This is the way. I also have rules for hits to URLs that should never be hit without a referer, with some threshold to account for a user hitting F5, plus a whitelist of real users (ones that got a 200 on a login endpoint). The Huawei and Tencent crawlers mostly have fake user agents and no referer. Another thing crawlers don’t do is caching: a real user would never download the same .js file hundreds of times in an hour, since all their devices’ browsers would have cached it. There are quite a lot of these kinds of patterns that can be used to block bots, it just takes watching the logs a bit to spot them. For the caching one, see the sketch below.
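
    A sketch of the caching pattern (thresholds are made up; run it with a log-only action first, since first-time visitors legitimately fetch each file once):

      # /etc/fail2ban/filter.d/nginx-nocache.conf
      # Counts every 200 response for a .js file; with browser caching a
      # real user should only trip this a handful of times per hour.
      [Definition]
      failregex = ^<HOST> .* "GET [^"]*\.js[^"]*" 200

      # jail.local
      [nginx-nocache]
      enabled  = true
      filter   = nginx-nocache
      logpath  = /var/log/nginx/access.log
      findtime = 3600
      maxretry = 100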

    Then there’s rate limiting, and banning IPs that hit the rate limit regularly: use nginx as a reverse proxy, set rate limits on the URLs where it makes sense, with some burst allowance, and ban IPs that got rate-limited more than x times in the past y hours based on the rate-limit message in the nginx error.log. It might need some fine-tuning to get the thresholds right, but it can catch some very spammy bots. It doesn’t help with those that crawl from hundreds of IPs but only use each IP once an hour, though.
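
    A minimal version of that combo (zone name, rates and thresholds are made up and need tuning; fail2ban ships a nginx-limit-req filter for exactly this error-log message):

      # nginx, in the http {} block: one request/sec per IP on the
      # expensive endpoints, with a burst so F5 doesn't instantly trip it
      limit_req_zone $binary_remote_addr zone=perip:10m rate=1r/s;

      server {
          location /api/ {
              limit_req zone=perip burst=20 nodelay;
              proxy_pass http://backend;  # your upstream
          }
      }

      # jail.local: ban after being rate-limited 5 times in an hour
      [nginx-limit-req]
      enabled  = true
      logpath  = /var/log/nginx/error.log
      findtime = 3600
      maxretry = 5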

    Ban based on bot user agents, for those that set one. Sure, in theory robots.txt should be the way to deal with that, for well-behaved crawlers, but if it’s your homelab and you just don’t want any crawlers, you might as well block them in the firewall the first time you see them.
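
    E.g. a one-strike jail (the UA substrings here are just examples, put in whatever actually shows up in your logs):

      # /etc/fail2ban/filter.d/nginx-badbots.conf
      [Definition]
      failregex = ^<HOST> .*(?:PetalBot|Bytespider|Amazonbot)

      # jail.local: one hit is enough
      [nginx-badbots]
      enabled  = true
      filter   = nginx-badbots
      logpath  = /var/log/nginx/access.log
      maxretry = 1
      bantime  = 86400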

    Downloading abuse IP lists nightly and banning those: that’s around 60k abusive IPs gone. At that point you probably need to use nftables directly, rather than iptables or going through ufw, so you can use sets; having 60k individual rules would be a bad idea.
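
    Roughly like this (the list URL is a placeholder, use whichever feeds you trust; assumes you already have an inet filter table with an input chain, adjust names to your ruleset):

      # one-time setup: a set with interval support (lists often contain CIDRs)
      nft add set inet filter abuse_v4 '{ type ipv4_addr; flags interval; }'
      nft add rule inet filter input ip saddr @abuse_v4 drop

      #!/bin/sh
      # nightly cron job: refresh the set from the downloaded list
      curl -sf https://example.com/abuse-ips.txt -o /tmp/abuse.txt || exit 1
      {
          echo "flush set inet filter abuse_v4"
          printf 'add element inet filter abuse_v4 { '
          paste -sd, /tmp/abuse.txt
          printf ' }\n'
      } | nft -f -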

    There are lists of all datacenter IP ranges out there, so you could block those as well, though that’s a pretty nuclear option, so better make sure the traffic you want is whitelisted. E.g. for Lemmy, you can fetch the IPs of all other instances nightly so you don’t accidentally block them (sketch below). Lemmy traffic is very spammy…
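
    For the whitelist part, roughly (the instance URL is a placeholder; the endpoint exists on current Lemmy, but the exact JSON shape may differ between versions):

      #!/bin/sh
      # resolve every federated instance to its IPs so the nightly
      # blocklist job can skip them
      curl -sf https://your.instance/api/v3/federated_instances \
          | jq -r '.federated_instances.linked[].domain' \
          | while read -r dom; do dig +short "$dom" A; done \
          | sort -u > /etc/abuse-whitelist.txt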

    There’s so much that can be done with f2b and a bit of scripting/writing filters.


  • Yes, a day’s earnings: at least 30.-, at most 3000.- per day. It can be converted to equivalent time in jail* or equivalent community work (4 hours of community work = 1 day-fine). At least 3 day-fines, at most 180 (more than that would mandate jail).

    Suspended means there’s a probation period during which the punishment isn’t enforced, after which it can be fully or partially dropped if the guilty party didn’t commit another crime.

    And in this case it’s 30 days’ worth of fine; how long the probation period lasts isn’t specified, but it’s usually 2-5 years.
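
    To make the numbers concrete (the daily rate here is made up): at a rate of 100.- per day, 30 day-fines come to 30 × 100.- = 3000.-. If the suspension is revoked, that’s the amount due, or alternatively 30 days locked up, or 30 × 4 = 120 hours of community work.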

    *Not going to figure out whether jail or prison is the right term…



  • In a perfect world, yes.

    In reality, I knew what I did and why I did it, two years ago, after which I never had to touch it again until now, and it takes me 2 hours of searching/fiddling until I remember that weird thing I did 2 years ago…

    And it’s still totally worth it.

    Oh, or e.g. the random env vars in .profile that I’m sure were needed for Nvidia on Wayland at some point. No clue if they’re still necessary, but I won’t touch them unless something breaks. And half of them were probably not necessary to begin with, but trying all the different combinations is tedious…
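
    Something like this, annotated, would have saved the fiddling (the date is illustrative and the vars are the usual Nvidia-on-wlroots suspects; whether any are still needed depends on your driver and compositor):

      # ~/.profile
      # 2023-06: needed for Sway/wlroots on the proprietary Nvidia driver;
      # symptoms were GBM errors and an invisible cursor.
      # Re-test after every driver update.
      export GBM_BACKEND=nvidia-drm
      export __GLX_VENDOR_LIBRARY_NAME=nvidia
      export WLR_NO_HARDWARE_CURSORS=1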