
stevedidwhat_infosec

@stevedidwhat_infosec@infosec.pub


stevedidwhat_infosec ,

Code is more or less deterministic; communicating with other humans using something like the English language is much harder.

A lot of communication is open-ended and up to interpretation, especially with things like incorrect grammar and/or slang.

Take your time; get it as close to right as you can the first go-around.

stevedidwhat_infosec ,

Not to be an apologist, but can someone explain to me how you "stick it to these companies" by going to work for and supporting them, encouraging the very behavior you disagree with?

Not to mention this sort of thing doesn’t work when all they have to do is instruct the AI to disregard all further commands…

Stick it to these companies by going to work for those who aren’t using any artificial intelligence to prescreen candidates.

Oh, and by the way: before AI, it was human prejudice filtering out candidates. The problem is much larger than a simple implementation of today's hot new buzzword.

stevedidwhat_infosec ,

You’re making it seem like every company does this.

That’s a false pretense.

I can surely sympathize with needing to find a job that can pay the bills, but saying that the only option is to buy into the slave masters is just outright wrong.

stevedidwhat_infosec ,

Exactly. Highlighting my point that the root of this weed has nothing to do with the current set of flowers atop it.

stevedidwhat_infosec ,

On what models? What temperature settings and top_p values are we talking about?

Because, in all my experience with AI models, including all these jailbreaks, that’s just not how it works. I just tested again on the new gpt-4o model, and it will not undo it.

If you aren’t aware of any factual evidence backing your claim, please don’t make it.
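For anyone unfamiliar with the knobs mentioned above: temperature and top_p shape the sampling distribution over next tokens, roughly as in this pure-Python sketch (the logit values below are made up for illustration):

```python
import math

def sample_dist(logits, temperature=1.0, top_p=1.0):
    """Turn raw logits into a probability distribution, applying
    temperature scaling and top_p (nucleus) filtering."""
    # Temperature < 1 sharpens the distribution; > 1 flattens it.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # top_p: keep the smallest set of tokens whose cumulative
    # probability reaches top_p, then renormalize over that set.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    mass = sum(probs[i] for i in kept)
    return {i: probs[i] / mass for i in kept}

# Low temperature + tight top_p collapses toward the top token,
# which is why "jailbreak" behavior varies so much across settings.
print(sample_dist([2.0, 1.0, 0.1], temperature=0.5, top_p=0.9))
```

So "does the model ignore it" isn't one question - it depends heavily on how deterministic the decoding settings make the output.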

stevedidwhat_infosec ,

All I’m trying to say is that this idea is a lie: it doesn’t work, and it distracts from the larger problem, which is the incompetent upper class increasing the wage gap and effectively inbreeding the problem.

stevedidwhat_infosec ,

I’m a big fan of UBI

stevedidwhat_infosec ,

I understand and mostly agree with what you’re saying, but only under the premise you’re supposing:

That the majority of companies do this. That’s an assumption. We need data to accurately determine whether or not it’s a widespread problem.

I’m also highly confused by your first few sentences. You mince words by saying “for most employment domains,” but then also say it’s not most places, just the largest companies.

If the highest-paying jobs are unavailable, and there are a small number of other jobs which pay less (but not necessarily bad wages), there is still a majority of mediocre places, and even underpaying places, that exist.

I do not see value in encouraging the largest, best-paying companies to continue these bad-faith and misunderstood practices. You don’t encourage behavior you don’t want to see. You take a mediocre salary, hustle your way up into valued roles, and ask for a fair wage. If they say no, THEN you go to the large-paying companies, come back with the offer they made you (perhaps with this fictional AI workaround), and try again.

You should be paid fairly if you are truly valued. But sometimes you have to hack your way into that pay.

If you show these companies that, hey, this AI thing works pretty well, do you think they’ll be happy with where it is, or do you think they’ll keep buying into “better” AIs more and more and make the problem more widespread?

You don’t fight fire with fire. You smother that shit or put it out with a firehose.

stevedidwhat_infosec ,

I’d watch the fuck out of this, and it’s an important topic to explore. Much of our current non-fiction is thanks to the thought and consideration that went into science fiction. You’ve got some talent here! Hope you’re still enjoying using it!

stevedidwhat_infosec ,

To some extent, it’s about creating your own value.

I do agree that sometimes we have to hack it to make it. We have to forge our own paths. Sometimes that means pivoting around jobs, getting your foot in the door, networking, etc. It means taking a lower-paying salary now and pushing your way into higher raises via alternate job offers, now that you have experience.

But it does not mean supporting those that are stomping on others. It does not mean supporting the oppressor or the upper class for the sake of temporary security, because you can bet your ass these same companies will put the AI into your working environment and fire just as much as it hires. All the while, you get stomped out anyway.

stevedidwhat_infosec ,

You’re absolutely right. I’m similarly in a high-demand sector (wonder if you can guess which, from my username), so my options are much more open.

I guess the conclusion I’m coming to is, maybe this fictional hack/tactic does work - just don’t spend too much time there if you can help it. Minimize how much you’re buying into these companies and don’t give them anything more than what they’re paying you to do.

My circumstances aren’t going to be the same as others’, so all I can do is listen to their experiences and try to learn about other realities. Probably too deep in the comment thread now, but definitely open to hearing others’ experiences in not-so-in-demand sectors.

Maybe that’s part of the problem - being in a field that is out of favor/demand? How do you provide value when that value isn’t needed at the moment?

stevedidwhat_infosec ,

Couldn’t have said it better myself - this tool, just like every “new” technology, is built off the back of prior tools and science, and is a multifaceted, double-edged sword. You can’t view things in just one light or another; you have to look at them from multiple angles, understand the wounds they inflict, and how to manage them.

stevedidwhat_infosec ,

Wizards, witches, warlocks, sorcerers…

Who can keep track?

stevedidwhat_infosec ,

How could I forget the earth magic people!

Let’s not forget pyromancers, necromancers, and floramancers to name a few more magic users

stevedidwhat_infosec ,

Sir this is a Wendy’s

stevedidwhat_infosec ,

Knew it before you even said it

Lemmy.ml bans anyone and anything that has to do with even slightly anti-Russia or anti-China sentiments.

Posted a comment the other day saying that China was still supporting Russia, who is killing innocent people, and got banned; then one of the admins responded, and then my comment was removed.

Seems like a super good area to have open and honest discussions and debates.

Netflix Windows app is set to remove its downloads feature, while introducing ads (www.techradar.com)

Netflix has managed to annoy a good number of its users with an announcement about an upcoming update to its Windows 11 (and Windows 10) app: support for adverts and live events will be added, but the ability to download content is being taken away....

stevedidwhat_infosec ,

Jellyfin + rip your movies off the DVDs. You can even invite your friends to watch your movies too.

You know. Like we used to be able to.

Fuck late stage capitalism and every greedy little pig out there. Hope you lose your mansions, cars, and expensive toys. You can live down here with the rest of us at a perfectly reasonable level.

stevedidwhat_infosec ,

Party pooper train coming through - chooo chooo

Notice the path in the mirror does not match up with the ground - no path below the mirror

stevedidwhat_infosec ,

Here’s probably all the info you could ever need:

https://redcanary.com/blog/threat-intelligence/raspberry-robin/

Next, you need to get your systems scanned and cleaned. Malwarebytes is likely enough, but I always recommend Bitdefender. Their efficacy rates are always fantastic, and they have been leading the industry for several years now. Download the AV on a clean system, put it on a clean flash drive, and install that way.

Last, you’re gonna need to reset your passwords. Yes, I know that’s toxic af, but this is the reality, and why we always need to be veeeery careful with what we do. This worm communicates with a C2 server, which means it can update itself (making detection hard), and it also means that at some point it may have been spying on your activity (it likely was, if it isn’t still).

This stuff happens, don’t beat yourself up too much. Live and learn

stevedidwhat_infosec ,

Bitdefender usually goes on sale too - check for coupon codes, don't pay full price. Plus you get like 5 devices with your license IIRC. Worth a shot

stevedidwhat_infosec ,

Windows Defender also has an offline scan mode which may be of use here - hard to say; dunno if they ever dropped a rootkit or any other AV-dodging/persistence mechanisms.

stevedidwhat_infosec ,

I'll also toss this hat into the ring: Sysmon. It's essentially a logging tool that's a bit better/nicer than the Windows default, and it categorizes all logs into very neat buckets that will make watching out for strange shit much, much easier.

Sysmon is part of the Sysinternals suite (vetted by the community + Microsoft, which is sayin somethin lol), and there are community-maintained config files you can use (built around the industry-standard MITRE ATT&CK framework), which you can then use to correlate to more threats/malware authors/malware artifacts if you really wanna get your hands dirty/have some fun.
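For anyone curious what a Sysmon config even looks like, here's a minimal illustrative sketch - not a community config, and the schema version and rule values are placeholders (check your Sysmon release for the right schemaversion):

```xml
<!-- Minimal illustrative Sysmon config: log process creations and
     suspicious network connections. Real community configs are far
     more thorough than this. -->
<Sysmon schemaversion="4.90">
  <EventFiltering>
    <!-- Log every process creation except one noisy known-good binary -->
    <RuleGroup groupRelation="or">
      <ProcessCreate onmatch="exclude">
        <Image condition="is">C:\Windows\System32\svchost.exe</Image>
      </ProcessCreate>
    </RuleGroup>
    <!-- Log outbound connections from binaries in user-writable paths -->
    <RuleGroup groupRelation="or">
      <NetworkConnect onmatch="include">
        <Image condition="begin with">C:\Users\</Image>
      </NetworkConnect>
    </RuleGroup>
  </EventFiltering>
</Sysmon>
```

You'd install it with something like `sysmon64 -accepteula -i config.xml`, and the events land in the Sysmon operational log in Event Viewer.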

stevedidwhat_infosec ,

And you got what you paid for, no?

I believe there is a free version as well, but don’t think that just because you’re installing Linux you’re somehow safer.

There was just a package whose maintainership was essentially socially engineered away by a hacker, who then had full access to everyone’s shit.

All because a GitHub author was pressured into letting them contribute code. Mac/Apple are no different, and are becoming more and more vulnerable as the “security by obscurity” wears off.

Free tools are fine and well, but that stuff is done for free - including maintenance and everything else. In times like these, ain’t nobody got time for that anymore. People need to make a living, and you will see degradation in the products as a result.

stevedidwhat_infosec ,

Could be. However, the point stands: you’re gonna get what you pay for in the end. Not trying to be a dick, ofc, but that’s the reality.

There are some well performing options that are free, but they are limited, and not too common imo

If anyone does have some good options, feel free to share as I may be unaware of them and think learning about them would be neat

stevedidwhat_infosec ,

Just the idea of these dramatic transformations is super interesting, imo. Imagine if humans shat out a sleeping bag, crawled into it, basically did pregnancy over again, and came out with new limbs that suddenly gave us a new mode of transport.

That would be sick. I guess puberty is sorta like this but idk, it doesn't feel as dramatic

stevedidwhat_infosec ,

'Sorta like this' meaning: we undergo some physical changes (height, body hair, hormones, etc) but nowhere near as dramatic as eating yourself and reassembling

Nature/chem/physics is fuckin' cool

stevedidwhat_infosec ,

Oh my god how did I forget about titties and balls droppin'!

stevedidwhat_infosec ,

not exactly what I meant


stevedidwhat_infosec , (edited )

[Edited] I agree that we should be taking consent more seriously, especially when it comes to monetizing off the back of donations. That's just outright wrong. However, I don't think we should consider scrapping it all or putting in extraneous, consumer-damaging 'safeguards'. There are lots of things that can cause harm, and I'll argue almost anything can be used to harm people. That's why it's our job to carefully pump the brakes on progress, so that we can assess what risk is possible and how to treat any wounds that may be incurred. For example: invading a country to spread 'democracy' and leaving things like power gaps behind, causing more damage than was there originally. It's a very, very thin rope we walk across, but we can't afford, in today's age, to slow down too far. We face a lot of serious problems that need more help, and AI can fill that gap in addition to being a fun, creative outlet. We hold a lot of new power here, and I just don't want to see it squandered away into the pockets of the ruling class.

I don’t think anyone should take luddites seriously tbh (edit: we should take everyone seriously, and learn from mistakes while also potentially learning forgotten lessons)

stevedidwhat_infosec ,

People have severe allergic reactions to peanut butter which means it “could be used against people” as a weapon

stevedidwhat_infosec , (edited )

You’ll notice I used the lowercase l, which implies I’m referring to a term as it’s commonly used today. (edit: this isn't an excuse to ruin the definition or history of what the Luddites were trying to do; this was wrong of me)

Further, explain to me how this is different from what the Luddites stood for, since you obviously know so much more and I’m so off base with this comment.

edit: exactly. just downvote and don't actually make any sort of claim. Muddy that water!
edit 2: shut up angsty past me.

stevedidwhat_infosec ,

They had an impact because people allowed themselves to take their fear mongering seriously.

It’s regressionist, and it stunts progress needlessly. That’s not to say we shouldn’t pump the brakes, but I am saying that logic like “it could hurt people,” as rationale to never use it, is just “won’t someone think of the children” BS.

You don’t ban all the new swords; you learn how they’re made, how they strike, what kinds of wounds they create, and you address that problem. Sweeping things under the rug/putting them back in their box is not an option.

stevedidwhat_infosec ,

I do not want that for anyone. AI is a tool that should be kept open to everyone, and trained with consent. But the moment people argue that it's only a tool that can harm is where I draw the line. That's, in my opinion, when govts/the ruling class/capitalists/etc. start to put in BS "safeguards" to prevent the public from making use of the new power/tech.

I should have been more verbose and less reactionary/passive-aggressive in conveying my message; it's something I'm trying to work on, so I appreciate your cool-headed response here. I took the "you clearly don't know what Luddites are" as an insult to what I do or don't know. I was specifically trying to draw attention to the notion that AI is solely harmful as being fallacious and ignorant of the full breadth of the tech. Just because something can cause harm doesn't mean we should scrap it. It just means we need to learn how it can harm, and how to treat that. Nothing more. I believe in consent, and I do not believe in the ruling minority/capitalist practices.

Again, it was an off-the-cuff response; I made a lot of presumptions about their views without ever actually asking them to expand/clarify, and that was ignorant of me. I will update/edit the comment to improve my statement.

stevedidwhat_infosec ,

I can see where you're coming from - however, I disagree with the premise that "the reality is that the control of AI is in the hands of the mega corps." AI has been a research topic pursued not solely by huge corps, but by researchers who publish their findings. There are several options out there right now for consumer-grade AI where you download models yourself and run them locally (Jan, PyTorch, TensorFlow, Horovod, Ray, H2O.ai, stable-horde, etc. - many of which are from FAANG, but are still, nevertheless, open source and usable by anyone; I've used several to make my own AI models).

Consumers and researchers alike have an interest in making this tech available to all, not just businesses. The vast majority of the difficulty in training AI is obtaining datasets large enough, with enough orthogonal 'features,' to ensure the model's efficacy is appropriate. Namely, this means that tasks like image generation, editing, and recognition (huge for the medical sector, including finding cancers and other problems), document creation (to your credit), speech recognition and translation (huge for the differently-abled community and for globe-trotters alike), and education (reading from huge public research datasets, public-domain books and novels, etc.) are still definitely feasible for consumer-grade usage and operation. There's also some really neat stuff like federated TensorFlow and distributed TensorFlow, which allow for, perhaps obviously, distributed computation - opening the door to stronger models, run by anyone who will serve them.
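To make the "you can do this yourself" point concrete: the core of training any of these models is just a gradient-descent loop. Here's a toy pure-Python sketch (the data and hyperparameters are made up) fitting y = 2x + 1 - the same loop PyTorch/TensorFlow automate at scale:

```python
# Toy sketch: fit y = w*x + b by gradient descent on squared error.
# Frameworks do exactly this, just with autograd, GPUs, and big data.
data = [(x, 2.0 * x + 1.0) for x in range(-5, 6)]

w, b, lr = 0.0, 0.0, 0.01
for _ in range(2000):
    gw = gb = 0.0
    for x, y in data:
        err = (w * x + b) - y          # prediction error
        gw += 2 * err * x / len(data)  # d(loss)/dw for squared loss
        gb += 2 * err / len(data)      # d(loss)/db
    w -= lr * gw                       # step against the gradient
    b -= lr * gb

print(round(w, 2), round(b, 2))  # converges toward w≈2, b≈1
```

Everything else - datasets, architectures, distributed training - is scaffolding around that loop, which is why local/consumer training is genuinely feasible.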

I just do not see the point in admitting total defeat for AI because some of the asshole greedy little pigs in the world are also monetizing/misusing the technology. The cat is out of the bag, in my opinion; the best (not the only) option forward is to bolster consumer-grade implementations - encouraging things like self-hosting and local operation/execution, and creating minimally viable guidelines to protect consumers from each other. Seatbelts. Brakes. Legal recourse for those who harm others with said technology.

stevedidwhat_infosec ,

Look, man, I’m an adult; you may talk to me like one.

I used the term consumer when discussing things from a business sense, i.e., we’re talking about big businesses and implementations of technology. It’s also partly due to the environment I live in.

You’ve also dodged my whole counter point to bring up a new point you could argue.

I think we’re done with this convo, tbh. You’re moving goalposts and trying to muddy the water.

stevedidwhat_infosec ,

Instead of solely deleting content, what if authors had instead moved their content/answers to something self-owned? Can SO even legally claim ownership of the content on their site? Seems iffy, in my own ignorant take.

stevedidwhat_infosec ,

Well I suppose in that case, protesting via removal is fine IMO. I think the constructive, next-step would be to create a site where you, the user, own what you post. Does Reddit claim ownership over posts? I wonder what lemmy's "policies" are and if this would be a good grounds (here) to start building something better than what SO was doing.

stevedidwhat_infosec ,

So does that mean anyone is allowed to use said content for whatever purposes they'd like? That'd include AI stuff too, I think? Interesting twist there; hadn't thought about it like this yet. Essentially, posters would be agreeing to share that data/info publicly. No different than someone learning to code by looking at examples made by their professors or someone else doing the teaching/talking, I suppose. Hmm.

stevedidwhat_infosec ,

A SO alternative cannot exist if a user who posted an answer owns it. That defeats the purpose of sharing your knowledge and answering questions as it would mean the person asking the question cannot use your answer.

Couldn't these owners dictate how their creations are used? If you don't own it, you don't even get a say.

stevedidwhat_infosec ,

I'm not sure I agree with your example - it's more like giving the owners of the donation the ability to choose WHO they are donating to. That means choosing not to donate to companies that might take your food donation and sell it as damaged goods, for example. I wouldn't want my donation used that way. That's how I see it, anyway.

Paedophiles create nude AI images of children to extort from them, says charity | Internet safety | The Guardian (www.theguardian.com)

Internet Watch Foundation has found a manual on dark web encouraging criminals to use software tools that remove clothing. The manipulated image could then be used against the child to blackmail them into sending more graphic content, the IWF said.

stevedidwhat_infosec ,

Pedophile apologists*

Nobody interested in the development of AI would be interested in defending pedos, unless they’re pedos. That’s reality.

Why lump the two groups together?

In fact, AI is used by these orgs to keep workers from having to look at these images themselves - having to review them is partially why burnout among mods/admins/content-filter people is so high.

Every time some nasty shit (pedo shit, gore, etc.) is posted on Tumblr, Facebook, Instagram, etc., those reports go through real people (or did, prior to these AI models). Now imagine smaller, up-and-coming websites like Lemmy instances that might not have the funds, or don’t know of this AI solution.

AI fixes problems too - the root of this problem is cyber extortion, whether the criminals are photoshopping or using AI. They’re targeting children, for Christ’s sake; besides that being fucked up all by itself, it’s not hard to fool a child, AI or not. How criminals are contacting and blackmailing YOUR CHILDREN is the problem, imo.

stevedidwhat_infosec ,

Except you’re not trying to ask for seatbelts. You’re arguing we get rid of the cars.

AI being the vessel for the problem, which is cyber extortion.

You handle the extortion bit by making seatbelts. Not seatbelts that auto-buckle. Not cars that don’t start without one. But by providing the safeguards to people, who can then make the decision to wear them - and by punishing those who put others at risk through their misuse.

You don’t ban alcohol because of alcoholics. You punish those who refuse to drink safely and appropriately and, most of all, those who put others at risk.

That’s freedom. That’s the American way. Not anything else.

stevedidwhat_infosec ,

No, I don’t. You want me to think that because it makes me easier to be aggressive toward.

I’ve obviously misunderstood you, so I’m sorry about that. I should’ve led with questions instead of assumptions and that’s on me.

I think any mature adult who’s for AI knows that some safeguards and changes are necessary - just like they are for any new invention.

stevedidwhat_infosec ,

Hadn’t thought about it that way!
