https://github.com/KerfuffleV2 — various random open source projects.

  • 2 Posts
  • 41 Comments
Joined 1 year ago
Cake day: June 11th, 2023

  • As a general statement: No, I am not.

    You didn’t qualify what you said originally. It either has the capability or it doesn’t: you said it didn’t, and it actually does.

    You’re making an over specific scenario to make it true.

    Not really. It isn’t that far-fetched that a company would see an artist they’d like to use but also not want to pay that artist’s fees so they train an AI on the artist’s portfolio and can churn out very similar artwork. Training it on one or two images is obviously contrived, but a situation like what I just mentioned is very plausible.

    This entire counter argument is nothing more than being pedantic.

    That’s not really true. What you said isn’t accurate under the literal interpretation, and it doesn’t work under the more general interpretation either. The person higher in the thread called it stealing: in that case it wasn’t, but AI models do have the capability to do what most people would probably call “stealing” or infringing on the artist’s rights. I think recognizing that distinction is important.

    Furthermore, if I’m making such specific instructions to the AI, then I am the one who’s replicating the art.

    Yes, that’s kind of the point. A lot of people (me included) would be comfortable calling that sort of thing stealing or plagiarism. That’s why the company in the OP took pains to say they weren’t doing that.


  • I just want fucking humans paid for their work

    That’s a problem whether or not we’re talking about AI.

    why do you tech nerds have to innovate new ways to lick the boots of capital every few years?

    That’s really not how it works. “Tech nerds” aren’t licking the boots of capitalists; capitalists just try to exploit any tech for maximum advantage. What are the tech nerds supposed to do, just stop all scientific and technological progress?

    why AI should own all of our work, for free, rights be damned,

    AI doesn’t “own your work” any more than a human artist who learned from it does. You don’t like the end result, but you also don’t seem to be able to come up with a coherent argument against the process of getting there. Like I mentioned, there are better arguments against it than “it’s stealing” or “it’s violating our rights”, because those have some serious issues.


  • Artists who look at art are processing it in a relatable, human way.

    Yeah, sure. But there’s nothing that says “it’s not stealing if you do it in a relatable, human way”. Stealing doesn’t have anything to do with that.

    knowing that work is copyrighted and not available for someone else’s commercial project to develop an AI.

    And it is available for someone else’s commercial project to develop a human artist? Basically, the “an AI” part is still irrelevant. If the works are out there where it’s possible to view them, then it’s possible for both humans and AIs to acquire them and use them for training. I don’t think “theft” is a good argument against it.

    But there are probably others. I can think of a few.


  • You can’t tell it to find art and plug it in.

    Kind of. The AI doesn’t go out and find/do anything on its own; people include images in its training data. So it’s the humans that are finding the art and plugging it in, most likely through automated processes that scrape massive amounts of images and add them to the corpus used for training.

    It doesn’t have the capability to store or copy existing artworks. It only contains the matrix of vectors which contain concepts.

    Sorry, this is wrong. You definitely can train an AI to produce works that are very nearly a direct copy. How “original” the works created by the AI are is going to depend on the size of the corpus it was trained on. If you train the AI on (or put a lot of training weight on) just a couple of works from one specific artist, it’s going to output stuff that’s very similar to them. If you train the AI on 1,000,000 images from all different artists, the output isn’t really going to resemble any specific artist’s style or work.

    That’s why the company emphasized they weren’t training the AI to replicate a specific artist’s (or design company’s, etc.) works.


  • Doubled down on the “yea were not gonna credit artist’s our AI stole from”. What a supreme douche

    I don’t think it’s as simple as all that. Artists look at other artists’ work when they’re learning, for ideas, for methods of doing stuff, etc. Good artists have probably looked at a ton of other artwork; they don’t just form their skills in a vacuum. Do they need to credit all the artists they “stole from”?

    In the article, the company made a point about not using AI models specifically trained on a smaller set of works (or some artist’s individual works). With something like that, it would be a lot easier to argue it’s stealing; but the same would be true if a human artist carefully studied another person’s work and tried to emulate their style/ideas. I think there’s a difference between that and “learning” from a large body of work without emulating any specific artist, company, individual works, etc.

    Obviously it’s something that needs to be handled fairly carefully, but that can be true with human artists too.





  • One would hope that IBM’s selling a product that has a higher success rate than a coinflip

    Again, my point really doesn’t have anything to do with specific percentages. The point is that if some percentage of it is broken you aren’t going to know exactly which parts. Sure, some problems might be obvious but some might be very rare edge cases.

    If 99% of my program works, the remaining 1% might be enough to make the program not only useless but actively harmful.

    Evaluating which parts are broken is also not easy. I mean, if there was already someone who understood the whole system intimately and was an expert then you wouldn’t really need to rely on AI to port it.

    Anyway, I’m not saying it’s impossible, or necessarily not going to be worth it. Just that it’s not an easy thing to make successful as an overall benefit. Also, issues like “some 1 in 100,000 edge case didn’t get handled successfully” are very hard to quantify since you don’t really know about those problems in advance; they aren’t immediately apparent, and the effects can be subtle and occur much later.

    Kind of like burning petroleum. Free energy, sounds great! Just as long as you don’t count all side effects of extracting, refining and burning it.


  • So you might feed it your COBOL code and find it only coverts 40%.

    I’m afraid you’re completely missing my point.

    The system gives you a recommendation; that recommendation has a 50% chance of being correct.

    Let’s say the system recommends converting 40% of the code base.

    The system converts 40% of the code base. 50% of the converted result is correct.

    50% is a random number picked out of thin air. The point is that what you end up with has a good chance of being incorrect and all the problems I mentioned originally apply.
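
    Just to spell out the arithmetic with those made-up numbers (a rough sketch in Python; the percentages are purely illustrative, not measurements of any real tool):

        # Illustrative numbers only: 40% of the codebase gets converted,
        # and 50% of the converted code turns out to be correct.
        converted_fraction = 0.40
        correct_given_converted = 0.50

        usable = converted_fraction * correct_given_converted
        print(f"{usable:.0%} of the original codebase is both converted and correct")
        # Prints: 20% of the original codebase is both converted and correct
        # ...and you still don't know which 20% without reviewing all of it.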



  • Even if it only converts half of the codebase, that’s still a huge improvement.

    The problem is it’ll convert 100% of the code base but (you hope) 50% of it will actually be correct. Which 50%? That’s left as an exercise for the reader. There’s no human, no plan, and not necessarily any logic to how it was converted, so it can be very difficult to understand code like that, and you can’t ask the person who wrote it why stuff is a certain way.

    Understanding large, complex codebases one didn’t write is a difficult task even under pretty ideal conditions.


  • This sounds no different than the static analysis tools we’ve had for COBOL for some time now.

    One difference is people might kind of understand how the static analysis tools we’ve had for some time now actually work. LLMs are basically a black box. You also can’t easily debug/fix a specific problem. If the LLM produces wrong code in one particular case, what do you do? You can try fine-tuning it with examples of the problem and what the output should be, but there’s no guarantee that won’t just change other stuff subtly and add a new issue for you to discover at a future time.



  • It has to match the prompt and make as much sense as possible

    So it’s specifically designed to make as much sense as possible.

    and they should not be treated as ‘fact generating machines’.

    You can’t really “generate” facts, only recognize them. :) I know what you mean though and I generally agree. I’m really interested in LLM stuff but I definitely don’t really trust them (and no one should currently anyway).

    Why did this bot say that Hitler was a great leader? Because it was confused by some text that was fed into the model.

    Most people are (rightfully) very hesitant to say anything positive about Hitler, but he did accomplish some fairly impressive stuff. As horrible as their means were, Nazi Germany also advanced science quite a bit. I’m not saying it was justified, justifiable, or good, but by a not entirely unreasonable definition of “great” he could qualify.

    So I’d say it’s not really that it got confused; it’s that LLMs don’t understand the need to be cautious about statements like that. I’d also say I prefer the LLM to “look” at stuff objectively and try to answer rather than responding to anything remotely questionable with “Sorry, Dave, I can’t let you do that. There might be a sharp edge hidden somewhere and you could hurt yourself!” I hate being protected from myself without the ability to opt out.

    I think part of the issue here is that because the output from LLMs looks like something a human might have written, people tend to anthropomorphize the LLM. They ask it for its best recipe using the ingredients bleach, water, and kumquat jam and then are shocked when it gives them a recipe for bleach kumquat sauce.




  • The graph actually looks like it’s saying the opposite. For most of the categories where there’s actually a decent span of time, it climbs rapidly and then slows down/levels off considerably. It makes sense, too: when a new technology is discovered, a breakthrough is made, or a field opens up, there’s going to be quite a bit of low-hanging fruit. So you get the initial step that wasn’t possible before and people scramble to participate. After a while, though, incremental improvements get harder and harder to find and implement.

    I’m not expecting progress with AI to stop, and I’m not even saying it won’t be “rapid”, but I do think progress on the LLM stuff is going to slow down compared to the last year or so, unless something crazy like the Singularity happens.


  • Because I don’t live in fantasy land where prepared food costs are exactly the same as raw food costs?

    Obviously it doesn’t. Either your time is so valuable that it’s clearly better to pay someone else to prepare stuff (which appears to be your position) or it’s not. The equation doesn’t change when we’re talking about 10 meals or 1 meal. You don’t seem to realize the inconsistency in your position.

    You don’t save “hundreds of dollars” by preparing one meal yourself; you might save a couple of dollars at the expense of your time. Roasting some coffee is a roughly equivalent amount of effort to preparing one meal yourself, and you probably save about the same amount of money. So if your time is so valuable that roasting coffee would be a ridiculous waste of it, then to be consistent the same would have to apply to meal prep.

    Yes, I agree with you that your entire argument doesn’t make sense.

    The “I know you are, but what am I?” turnaround seems a bit immature, don’t you think?