• 0 Posts
  • 59 Comments
Joined 2 years ago
Cake day: August 2nd, 2023

  • I may be biased (PhD student here), but I don’t fault them for being this way. Ethics is something that 1) requires formal training, 2) requires oversight, and 3) differs from person to person. Quite frankly, it’s not part of their training, it has never been emphasized as part of their training, and it’s subjective, shaped by cultural experience.

    What is considered an unreasonable risk of harm is going to be different for everybody. To me, if the entire design runs locally and does not collect data for Google’s use, then it’s perfectly ethical. That being said, this does not prevent someone else from adding data-collection features later. I think the original design of such a system should put a reasonable amount of effort into preventing that. But if that is done, then there’s nothing else to blame the designers for. The moral responsibility lies with the one who pulled the trigger.

    Should the original designer have anticipated this issue and thus never taken the first step? Maybe. But that depends on a lot of circumstances we don’t know, so it’s hard to say anything meaningful.

    As for the “more harm than good” analysis, I absolutely detest that sort of reasoning, since it attempts to quantify social utility in a purely mathematical sense. If this reasoning holds, an extreme example would be justifying harm to any minority group as long as it maximizes benefit for society as a whole. Basically Omelas. I believe a better test would be checking whether harm is introduced to ANY group of people; as long as that’s the case, the whole is unethical.


  • This is common for companies that like to hire PhDs.

    PhDs like to work on interesting and challenging projects.

    With nobody to rein them in, they do all kinds of cool stuff that makes no money (e.g. Intel Optane and transactional memory).

    Designing a real-time scam-analysis tool under resource constraints is interesting enough to be greenlit, but makes no money.

    Once released, they’ll move on to the next big challenge, and when nobody is there to maintain their work, it will be silently dropped by Google.

    I’m willing to bet more than 70% of the Google graveyard comes from projects like these.


  • An alternative definition: a real-time system is a system where the correctness of the computation depends on a deadline. For example, if I have a drone checking “with my current location + velocity, will I crash into the wall in 5 seconds?”, the answer is worthless if the system responds 10 seconds later.

    A real-time kernel is an operating system that makes it easier to build such systems. The main difference is that it offers lower latency than a usual OS for your one critical program: the OS will try to give that program as much priority as it wants (to the detriment of everything else) and handle all signals immediately (instead of coalescing/combining them to reduce overhead).

    Linux has real-time priority scheduling as an optional feature. Lowering latency does not always mean reduced overhead or higher throughput, which is why it’s opt-in: system builders can design RT systems (such as audio-processing systems, robots, or drones) that use these features without annoying the hell out of everyone else.
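
    As a rough illustration (my own sketch, nothing product-specific): on Linux, a program opts into real-time priority through the POSIX scheduling API. This assumes root or the CAP_SYS_NICE capability, and the priority value 50 is an arbitrary choice.

    ```c
    /* Minimal sketch: a process opting into Linux real-time scheduling.
     * Assumes root or CAP_SYS_NICE; priority 50 is an arbitrary choice. */
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        struct sched_param sp = { .sched_priority = 50 };

        /* SCHED_FIFO: run ahead of all normal (SCHED_OTHER) tasks until
         * this process blocks or yields -- "to the detriment of
         * everything else". 0 means the calling process. */
        if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {
            perror("sched_setscheduler");
            return 1;
        }

        /* ...latency-critical work goes here... */
        return 0;
    }
    ```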


  • systemd tries to unify a Wild West situation where everyone, their crazy uncle, and their shotgun-dual-wielding Grandma each has a different set of boot-time scripts. Instead of custom 200-line shell scripts, you now have a standard, simple syntax that takes 5 minutes to learn.
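
    For example, a whole init script collapses into a small unit file; this one is hypothetical (the name, path, and binary are made up):

    ```ini
    # Hypothetical /etc/systemd/system/myapp.service
    [Unit]
    Description=My app
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/myapp
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    ```

    Then enable and start it with `systemctl enable --now myapp.service`.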

    The downside is that certain complicated things that used to be 1 line now need multiple files’ worth of workarounds. Additionally, any custom scripts need to be rewritten as systemd services (assuming you don’t use the compat mode).

    People are angry that it’s not the same as before and that they need to rewrite any custom tweaks they have. It’s like driving a manual for years, wondering why the heck anyone needs an automatic, then realizing nobody is producing manual cars anymore.


  • Pretty sure expiry is handled by the local CrowdSec daemon, so it should automatically revoke rules once the set time is reached.

    At least that’s the case with the iptables and nginx bouncers (a 4-hour ban for probing). I would assume it’s the same for the Cloudflare one.
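
    If memory serves, that duration comes from the decision block in the CrowdSec profile. Roughly, a default-ish profiles.yaml looks like this (a sketch from memory, so check your own install):

    ```yaml
    # Sketch of a CrowdSec profile: the 4h duration is what produces the
    # time-limited ban, which the daemon then expires automatically.
    name: default_ip_remediation
    filters:
      - Alert.Remediation == true && Alert.GetScope() == "Ip"
    decisions:
      - type: ban
        scope: Ip
        duration: 4h
    on_success: break
    ```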

    Alternatively, maybe look into running two bouncers (1 local, 1 CF)? The CF one filters out most bot traffic, and if some still gets through, you block it locally?
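
    Registering both against the same local API should just be two registrations (the bouncer names here are made up):

    ```sh
    # Hypothetical names; each command prints an API key for that bouncer.
    cscli bouncers add cloudflare-bouncer
    cscli bouncers add firewall-bouncer
    ```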