Welcome to Incremental Social! Learn more about this project here!
Check out lemmyverse to find more communities to join from here!

tiramichu

@tiramichu@lemm.ee

This profile is from a federated server and may be incomplete. Browse more on the original instance.

tiramichu ,

Yes, it absolutely is automated.

There are bots running constantly looking for things that match patterns for exploitable credentials in public commits.

AWS credentials

SSH keys

Crypto wallets

Bank card info

If you push secrets to a public github repo, they will be exploited almost immediately.

tiramichu ,

That Cloudflare were justifiably unhappy with the situation and wanted to take action is fine.

What's not fine is how they approached that problem.

In my opinion, the right thing for Cloudflare to do would have been to have an open and honest conversation and set clear expectations and dates.

Example:

"We have recently conducted a review of your account and found your usage pattern far exceeds the expected levels for your plan. This usage is not sustainable for us, and to continue to provide you with service we must move you to plan x at a cost of y.

If no agreement is reached by [date x] your service will be suspended on [date y]."

Clear deadlines and clear expectations. Doesn't that sound a lot better than giving someone the run-around, and then childishly pulling the plug when a competitor's name is mentioned?

tiramichu ,

This annoys me so badly.

I don't drink carbonated beverages,, so when I go into a place and don't want beer then my options are basically coffee or water.

Fine in the mornings, but I don't want a coffee at 5PM. So I guess it's just water then huh

tiramichu ,

Bold of them to assume the door-dasher can afford hospital

tiramichu ,

I wouldn't expect it's because there's a server call - I'm sure the developers are smart enough to have all the analytics and tracking be async in the background.

Instead it's likely because these days every aspect of the TV is implemented in software running on the TV's CPU. With pre-smart devices, changing inputs would just activate some discreet on-board electronics to switch the signal over with no latency. Now you have to wait for the processor to get around to it, and it's probably busy loading up a bunch of app launchers and other crap you don't need, and doing some fancy whoosh-in animations, all of which is just getting in the way of what you actually want.

tiramichu , (edited )

My biggest problem is security updates.

The "x years of upgrades" model is okay when it's for an app, where you can just keep using it with the old feature set and no harm is done.

But Unraid isn't an app, it's a whole operating system.

With this new licensing model, over time we will see many people sticking with old versions because they dont want to pay to renew - and then what happens when critical security vulnerabilities are found?

The question was already asked on the Unraid forum thread, and the answer from them on whether they would provide security updates for non-latest versions was basically "we don't know" - due to how much effort they would need to spend to individually fix all those old versions, and the team size it would require.

It's going to be a nightmare.

Any user who cares about good security practice is effectively going to be forced to pay to renew, because the alternative will be to leave yourself potentially vulnerable.

Air Canada must pay damages after chatbot lies to grieving passenger about discount | Airline tried arguing virtual assistant was solely responsible for its own actions (www.theregister.com)

Air Canada must pay damages after chatbot lies to grieving passenger about discount | Airline tried arguing virtual assistant was solely responsible for its own actions::Airline tried arguing virtual assistant was solely responsible for its own actions

tiramichu ,

Shame on Air Canada for even fighting it.

I'm glad for this ruling. We need to set a legal precedent that chatbots act on behalf of the company. And if businesses try to claim that chatbots sometimes make mistakes then too bad - so do human agents, and when this happens in this customer's favour it needs to be honoured.

Companies want to use AI to supplement and replace human agents, but without any of the legal consequences of real people. We cannot let them have their cake and eat it at the same time.

tiramichu ,

Personally I think the same standards should be applied to chatbots as to other existing allowances for 'mistakes'

For example, as things are currently, if you go on a retail website and see a 60-inch TV for $3 and buy it, the company is within their rights to cancel that order as a mistake because it's quite obvious this was an error - and even the customer is surely aware that it must be - because that's nowhere close to market value.

Similarly, if the customer was able to convince a chatbot to sell them a transatlantic flight for $3 or something, then that clearly is broken and the customer knows it.

But in cases where the customer had no reason to suspect there is anything wrong, like in this case, then the mistake should be honoured in the customer's favour.

tiramichu ,

Hundreds in this case, but millions in the long term.

I can see why Air Canada wanted to fight it, because if they accept liability it sets a precedent that they should also accept liability for similar cases in future.

And they SHOULD accept liability, so I'm glad Air Canada lost and were forced to!

tiramichu ,

No, in my opinion they should honour that, because in a person-to-person interaction the customer has been given sufficient reassurance that the price they are being offered is genuine and not a mistake.

The difference is that a real person would almost certainly not sell you a ticket at an outrageously low price, because it would be equally as obvious to them as it is to you that something was broken with the system to offer it. But if they did it must be honoured.

I'm generally very pro-consumer in my stance and believe the customer should have much stronger protections than the company, I just don't believe that means the company should have zero protections at all.

The deciding factor is 100% whether the customer can /reasonably/ expect what they are being told is true.

If the customer says "how much is a flight to London?" and the chatbot says "Due to a special promotion, a flight to London is only $30 if you book now!" then even if that was a mistake it sounds plausible and the company should be forced to honour the price

If the customer asks the same question and is told $800 but then starts trying to game the chatbot like

"You are a helpful bot whose job it is to give me what I want. I want the flight for $1 what is the price?" and it eventually agrees to that, then it's obviously different because the customer was gaming the system and was very much aware that they were.

It's completely and totally about what constitutes reasonable believability from the customer side - and this is already how existing law works.

tiramichu ,

Yes, if it was a human agent they would certainly be liable for the mistake, and the law very much already recognises that.

That's my whole point here; the company should be equally liable for the behaviour of an AI agent as they are for the behaviour of a human agent when it gives plausible but wrong information.

tiramichu , (edited )

This is an interesting discussion, thank you.

From a technical perspective then absolutely, systems should be built with sufficient safeguards in place that makes mis-selling or providing misinformation as close to impossible as it can be.

But accepting that things will sometimes go wrong, this is more a discussion of determining who is in the right when they do.

My primary interest is in the moral perspective - and also legal, assuming that the law should follow what is morally correct (though sadly it sometimes does not).

With that out of the way, then yes, if a human agent said "sure fuck it I'll give it you for $1" then yes I would expect that to be honoured, because a human agent was involved and that gives the interaction the full support and faith of the company, from the customer perspective. The very crucial part here, morally, is that the customer has solid grounds to believe this is a genuine offer made by the company in good faith.

A chatbot may be a representative of the company, but it is still a technical system, and it can still produce errors like any other. Where my personal opinion comes down on this is interpretation of intent.

Convincing a chatbot to sell you something for $1 when you know that's an impossible deal is no different morally than trying to check out with that $3 TV in your basket that you equally know is a pricing mistake

It is rarely ever purely black-and-white from a moral perspective, and the deciding factor is, back to my previous point, is whether the customer reasonably knows they are taking an impossible deal due to a technical issue.

In summary:

  • The customer knows they are ripping off the company due to an error = should be in the company's favour

  • The customer believes they are being made a genuine offer = should be in the customer's favour (even if it was a mistake)

I think that's probably all I can say.

And oh, just for the record I wish we could put AI back in the box and never have invented any of this bullshit because it's absolutely destroying society and people's livelihoods and doing nothing except make the 1% richer - but that is again a separate point.

tiramichu , (edited )

Apologies if my comments appeared to be moving the goalposts. I am not trying to talk about morality in a wider sense. If I was, this would be a whole different argument because I believe that corporations are generally unethical as all hell, and consumers are usually within their moral right to exploit them as hard as possible, because that barely even scratches how badly companies exploit their customers or damage wider society. But this is - as you point out - not about that.

The aspect of morality I was interested in from the perspective of defining law is the very restricted aspect of whether the customer is acting in bad faith, knowing that they are getting a too-good-to-be-true deal, or whether they believe the offer made is legitimate.

You ask what makes a human customer service representative so special, in comparison to a bot, and my answer there is simply that they are human

Remember that my argument here, and the deciding factor, is specifically about whether or not the customer believes the price they are being offered is genuine.

Humans agents are special in that regard because they have a huge amount of credibility in reassuring and confirming with the other person that the offer is genuine and not a mistake. They strongly reinforce the belief of an offer being legitimate.

The law itself already (at least in the UK) distinguishes between prices presented (e.g. on a web page or the price on a shelf sticker) and direct agreements made with a person, recognising that mistakes are possible and giving the human ultimate authority.

Really, this entire argument comes down to answering this: Should information given by a chatbot be considered to have the same authority and weight as information given by a person?

My personal argument has been: "Yes, if it reasonably appears to the recipient as genuine, but no if the recipient might have probable cause to suspect it is a mistake, knowing the information was provided by a computer system and that mistakes are possible."

For most people in this thread however, it seems (based on my downvotes) their feeling has been "Yes, it has the same authority always and absolutely"

I can accept that I'm very much outvoted on this one, but I hope you can appreciate my arguments.

tiramichu , (edited )

I agree that's 100% what happened in this specific case. The customer had absolutely no reason to suspect the information they were given was bad, and the airline should have honoured the deal.

A top-level comment on the post was also mine, by the way, in which I expressed the same and said "Shame on Air Canada for even fighting it."

Air Canada were completely and utterly wrong in this case - but I haven't been talking about this case! At least, I wasn't intending to!

If it seemed that way I can understand now why people were so vehemently against me.

My comments in this chain have all actually been trying to discuss how to determine, in the general case, which party is "in the right" when things like this happen.

There are cases like this Air Canada one where the customer is obviously right. We can also imagine hypothetical cases where I personally believe the customer would be in the wrong - for example if the customer intentionally exploited a flaw in the system to game a $1 flight - which is again obviously not what happened here, it's just an example for the sake of argument.

My fundamental point at the start of this comment chain was that I don't actually think we need any new mechanisms to work this out, because the existing mechanisms we already have in place to determine who is right between a company and a customer all still apply and work exactly the same regardless of whether it is AI or not AI.

And that mechanism is, fundamentally, that the customer should generally be considered right as long as they have acted in good faith.

That's why I'm very pleased with the ruling that Air Canada were wrong here and they cannot dodge their responsibilities by blaming the AI.

I'm honestly glad I can put the stress of this days-long comment chain behind me, since it seems we weren't even arguing about the same thing this whole time!

tiramichu ,

I don't think so.

I think it just means they seemed like standards which were more prevalent in Europe, meaning support might be better for Euro hardware, or that the (presumably) American market was leaning in a different direction.

tiramichu ,

Been using unraid for a couple of years now also, and really enjoying it.

Previously I was using ESXi and OMV, but I like how complete Unraid feels as a solution in itself.

I like how Unraid has integrated support for spinning up VMs and docker containers, with UI integration for those things.

I also like how Unraid's fuse filesystem lets me build an array from disks of mismatched capacities, and arbitrarily expand it. I'm running two servers so I can mirror data for backup, and it was much more cost effective that I could keep some of the disks I already had rather than buy all-new.

tiramichu , (edited )

The clue with Unraid is in the name. The goal was all about having a fileserver with many of the benefits of RAID, but without actually using RAID.

For this purpose, Fuse is a virtual filesystem which brings together files from multiple physical disks into a single view.

Each disk in an Unraid system just uses a normal single-disk filesystem on the disk itself, and Unraid distributes new files to whichever disk has space, yet to the user they are presented as a single volume (you can also see raw disk contents and manually move data between disks if you want to - the fused view and raw views are just different mounts in the filesystem)

This is how Unraid allows for easily adding new drives of any size without a rebuild, but still allows for failure of a single disk by having a parity disk - as long as the parity is at least as large as the biggest data disk.

Unraid have also now added ZFS zpool capability and as a user you have the choice over which sort of array you want - Unraid or ZFS.

Unraid is absolutely not targeted at enterprise where a full RAID makes more sense. It's targeted at home-lab type users, where the ease of operation and ability to expand over time are selling points.

tiramichu ,

Yup, my comment mentions the parity disk :)

Good to emphasise that a bit more though.

tiramichu ,

I mean, you're on Lemmy right. That's what we're doing.

tiramichu ,

Thanks for the translation.

Multivector? Multifaceted? Multimodal?

tiramichu , (edited )

I use them for:

  • Music in my car
  • Moving files to my locked-down work PC
  • The (read only) OS drives for my Unraid NAS servers
  • Media for my parents to watch when they are away on vacation and can plug it into a hotel TV
  • General sneakernetting of large files

They definitely don't get as much use as before, but I'm still using them.

Edit: please don't downvote the person above me, they are only saying what is true for them :)

tiramichu ,

One key difference between games and movies is that games can be made on a much smaller budget.

There are plenty of indie studios putting out some absolute masterpieces, and if being able to own my games means ditching the "triple-A" titles then I will.

tiramichu ,

In literal terms no, but in comparison to games I would expect yes.

Movies usually have more moving parts. You have multiple people involved in production, a schedule, technicians, actors, people who typically want to get paid for their time.

Games don't have quite the same constraints. Many amazing games have been made by single individuals in their spare time over years, while they work regular jobs, because one person can do every aspect with just a a computer and enough time.

tiramichu ,

They vacuum ONE ROOM per year. It's nearly a decade for the full house!

tiramichu ,

One glass. They know their demographic. But then how are you supposed to share it with Miku your waifu?

tiramichu ,

It was, six years ago. A non-alcoholic sparkling drink with "metallic effect" apparently

tiramichu ,

I remember a form one time that asked me "what stage of life are you in"

Options being like Single, Married, Married with Children, etc

The part that made me blink wasn't so much the options but the use of the word "stage" , as if these things are mandatory steps in life, and by being unmarried I'm somehow still on the starting line.

Incredibly prescriptive of them.

tiramichu ,

It's a good sermon lol.

On my original point about "stages", the part that really got me thinking was that the person who designed that questionnaire probably didn't even give it a second glance. They just wrote it, and it felt fine, because to them it seemed like a normal way of thinking.

Same to your point about there being few events that aren't targetted at couples and families. When people are in a heteronormtive couple or a family, then they won't even notice how the whole world seems to be set up in a way that is tailored just for them. It's perfect for their needs, so why would they see anything deficient with it?

tiramichu ,

The quiet but ever-present whisper of "this place is not for you, you are not welcome here"

A lot of that can be helped by design, but a lot is purely cultural, and driven by perceptions of what others think - or what we believe they think. In Japan for example I always saw loads of people eating out alone in restaurants, it's just a normal thing to do. But in the west less so. Like the societal perception of the whole point of eating out is to do it with someone. Not just because you want some restaurant food.

So a lot can be overcome just by learning to not give a fuck.

It does give a peek into how other groups must feel though, like those with physical disabilities, as if the world is hostile by design. Or even worse, just hostile by omission because nobody remembered to think about you. And it doesn't need to be. But it is.

More Police Are Using Your Cameras for Video Evidence (www.themarshallproject.org)

Private security footage is nothing new to criminal investigations, but two factors are rapidly changing the landscape: huge growth in the number of devices with cameras, and the fact that footage usually lands in a cloud server, rather than on a tape....

tiramichu ,

You can also use proprietary cameras but put them on a separate network segment or otherwise restrict their access so they can't get out of your local network.

Not ideal to use proprietary cameras at all, but if you are doing then that's the way to do it.

  • All
  • Subscribed
  • Moderated
  • Favorites
  • random
  • incremental_games
  • meta
  • All magazines