• 0 Posts
• 61 Comments
Joined 1 year ago
Cake day: June 14th, 2023
  • Operating System Concepts by Silberschatz, Galvin and Gagne is a classic OS textbook. Andrew Tanenbaum has some OS books too. I really liked his Operating Systems: Design and Implementation book, but I’m pretty sure that one is super outdated by now. I have not read his newer one, but it is called Modern Operating Systems iirc.




  • myslsl@lemmy.world to linuxmemes@lemmy.world · Htop too · ↑4 · 8 months ago

    If you have a fixed collection of processes to run on a single processor and unlimited time to schedule them in, you can always brute force all permutations of the processes and then pick whichever permutation maximizes and/or minimizes whatever property you like. The problem with this approach is that it has awful time complexity.

    Edit: There’s probably other subtle issues that can arise, like I/O interrupts and other weird events fwiw.
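    The brute-force idea above can be sketched in a few lines of Python. This is only an illustrative sketch under assumptions I’m choosing for concreteness (each process is reduced to a known CPU burst time, and the property being minimized is total waiting time); it is not anyone’s real scheduler, and the O(n!) cost is exactly the awful time complexity mentioned above.

```python
from itertools import permutations

def total_waiting_time(order):
    """Total waiting time when the given burst times run back-to-back on one CPU."""
    waiting, elapsed = 0, 0
    for burst in order:
        waiting += elapsed   # this process waits for everything scheduled before it
        elapsed += burst
    return waiting

def best_schedule(bursts):
    """Brute force: try all n! orderings, keep the one minimizing total waiting time."""
    return min(permutations(bursts), key=total_waiting_time)

bursts = [6, 8, 3, 1]
print(best_schedule(bursts))  # (1, 3, 6, 8): shortest-job-first falls out of the search
```

    Unsurprisingly, for this particular metric the brute force rediscovers shortest-job-first ordering, which is known to minimize average waiting time on a single processor.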


  • myslsl@lemmy.world to Memes@lemmy.ml · Important distinction · ↑1 ↓1 · 8 months ago

    I never said there was proof that god doesn’t exist. And like I said, there doesn’t need to be, as long as there is no documented sign whatsoever that points towards god actually existing.

    You also said: “A nonexistent almighty being”. Did you mean no gods exist, or did you mean all the gods people claim to exist so far have been debunked?

    More importantly, for the claim “no god exists” specifically, I disagree that no proof is required in general. There needs to be an actual proof, just as there needs to be a proof of the negation “a god exists”, for either to be worth accepting. If neither can be proved, why commit to believing the truth of either?

    Additionally, disproving particular examples doesn’t prove the general rule. Having no documented sign pointing to the existence of a god does not confirm the absence of a god any more than having no documented signs of a gas leak in your home confirms the absence of a gas leak in your home. Perhaps the detector you are using is broken; perhaps the type of gas leaking in your home is not detectable by your detector.

    It would also be incredibly hard to design any kind of empirical test to confirm or disconfirm the existence of gods in general (not just the christian flavored ones).





  • myslsl@lemmy.worldtoMemes@lemmy.mlImportant distinction
    link
    fedilink
    arrow-up
    1
    arrow-down
    2
    ·
    8 months ago

    > Why is it silly that the claim originally presented should have to present evidence first? The counter-claim only has zero burden of proof so long as the original claim has failed to give any proof of their own.

    That’s not what I’m claiming. I’m saying the claim AND counter-claim should provide evidence/proof before either one is accepted. Blindly believing not B because you can’t prove B is just as bad in my opinion as believing B itself with no proof.

    > You wouldn’t have to present an argument yet, at that stage. I’d think you’re really dumb for needing something like that proven to you, but the initial burden of proof would still be on me. However, when I quickly and easily provide proof that 2 + 2 does equal 4, THEN the burden of proof falls to you to prove your counter-claim.

    A lack of evidence or proof for some claim B is not sufficient proof for not B. It doesn’t really matter what claim we assign to B here.

    For example, you might not have evidence/proof that it will rain today (i.e. B is the statement “it will rain today”), that doesn’t give you sufficient evidence/proof to now claim that it will not rain today. You just don’t know either way.


  • myslsl@lemmy.worldtoMemes@lemmy.mlImportant distinction
    link
    fedilink
    arrow-up
    2
    arrow-down
    4
    ·
    8 months ago

    > This ties into the part you absolutely agreed with. The word “God” as it is defined now would not exist without the original unproven claims that God exists. Even if you’re not responding “God doesn’t exist” directly to someone who said “God exists”, you are, if nothing else, still responding to the original millennia-old claim that they do exist. For that reason, it is always a counter-claim.

    If I say god doesn’t exist to you, I feel like I’m making a true-or-false factual claim to YOU rather than to a bunch of old dead people or some greater historical/cultural context. The history of the word/definition might be relevant for deciding what the word means, but the claim is aimed at YOU. The actual truth status of the claim (even if we call it a counter-claim) that I might be making is either true or false (assuming we subscribe to bivalence like that), regardless of the history or culture that led us to the discussion.

    > As for what makes counter-claims different from regular claims, it’s simply that the burden of proof lies first with the original claim. A counter-claim has no responsibility to prove their claim until such time as the original claim presents evidence supporting itself.

    It seems like a silly double standard that only one side has a burden to prove their claim, while the other gets to claim the negation is true with no burden of proof.

    For example, if you say “2+2 is 4” and my response is “NO IT IS NOT. IT IS 3! I REFUSE TO PROVE IT THOUGH”, not only am I wrong in a classical arithmetic sense, but I have presented no argument for why you ought to believe my new counter-claim over your original claim. It would make no sense to believe me without more info in such a case.

    > The problem with that is I at least in theory could have looked up the tax code, remembered it, and then told you it correctly. Sure, I could have lied or remembered wrong, but it was 100% within my capacity to give you the accurate information, and even show you where I got the information from. With a claim about God’s existence, that’s impossible for either side of the debate as far as we know, and since the original claim was “God exists”, that side is, possibly forever, stuck holding the burden of proof.

    The fact that you can look up the tax code is not really a problem for my hypothetical example. It is not particularly hard to come up with hypotheticals where you just can’t easily obtain the answer. We could rephrase the context: perhaps we are stranded on a desert island. We could rephrase the question: perhaps it is about what some obscure historical figure had in their pockets on the day they died.

    To be clear, I’m not trying to argue for or against the existence of god. My issue is that there should be a burden of proof for the CLAIMS “god exists” and “god does not exist” if somebody is claiming either is true. I don’t think there’s any kind of burden for believing some random claim without proof, but I think it’s silly to commit to the negation of a claim without proof unless you have a reason to believe the negation. You can always just not commit and say you don’t know in such a case, rather than believing the claim or its negation.


  • myslsl@lemmy.world to Memes@lemmy.ml · Important distinction · ↑1 ↓7 · 8 months ago

    > It doesn’t. But, “God doesn’t exist” is not a claim, it is a counter-claim to the claim “God exists”.

    I’d agree that at least sometimes it is a counter-claim, but I don’t agree that counter-claims aren’t claims themselves. The wording “counter-claim” seems to me to indicate that counter-claims are just claims of a particular type.

    “God doesn’t exist” is surely a statement, right? If I tell you “god doesn’t exist” (whether or not in response to something you’ve said), it feels like I am claiming the statement “god doesn’t exist” is true.

    > The very concept of a higher power didn’t even exist until people started claiming without evidence that it did exist, and it’s been many branching games of telephone of that original unproven claim since then that has resulted in basically every major religion.

    I absolutely agree with you on this point.

    > The counter-claim of “God doesn’t exist” needs no proof because it is countering a claim that also has no proof. If and when the original multiple-millennium-old claim of “God exists” actually has some proof to back it up, then the counter-claim would need to either have actual proof as well to support it, or debunk the “evidence” if possible. But again, the original claim is literally thousands of years old and still has absolute bupkis to prove it, so… I’m not too worried.

    I don’t think we need proof to reject a claim like “god exists”. There’s no real good evidence for it and all attempts at proofs of this in the history of the philosophy of religion have been analyzed and critiqued to death in some pretty convincing ways.

    But there is, to me, a difference between rejecting the truth of a claim vs accepting the truth of its denial. So, for example, if you tell me the tax code says X, that is not a proof of what the tax code says. It would make sense for me to not outright believe you (since we are strangers), but you could be telling the truth, so it seems equally silly for me to immediately jump to believing the tax code doesn’t say X too.



  • myslsl@lemmy.world to Memes@lemmy.ml · Important distinction · ↑4 ↓20 · 8 months ago

    > No it doesn’t go both ways.

    > If something exists it should be easy to prove. There should be some form of sign of it.

    This is absolutely not true. Things can exist without being accessible to you directly in a manner that makes it easy to prove their existence.

    > On the other hand it is hard to disprove the existence of anything at all. How do we know there is not some teapot in outer space?

    Proving non-existence is not always hard. If we were arguing about the food in your fridge, and I claimed you had food in your fridge when you did not, you could easily prove me wrong by just showing me the contents of your fridge.

    More importantly, why does the hardness of doing a thing give you special status to make claims without proof? Seems like you are artificially constructing rules here solely because they benefit your position.

    > We can’t. But that is no reason to believe there is one.

    The universe is massive. There are teapots here. Why is it not plausible that some other alien race would also construct some kind of teapot? Also, consider the fact that all teapots here on earth are literally teapots in “outer space” in some sense.



  • myslsl@lemmy.world to Memes@lemmy.ml · Important distinction · ↑19 ↓27 · 8 months ago

    > Despite millennia of disproven lies about a nonexistent almighty being, you still believe this being indeed does exist

    There is a whole area in Philosophy called Philosophy of Religion that would really like your disproof of the existence of such a being. They have atheists and theists alike.




  • I’m cherry picking, yet you cherry picked the sentence “I don’t really think I’m cherry picking” over the entirety of my previous comment to you?

    > Virtually my whole last paragraph was ignored in my original comment.

    Did you not read the entire last paragraph of my first comment where I directly quoted and responded to the last paragraph of your original comment? Here, let me quote it for you. I see reading is not your strong suit.

    Quote I took from your last paragraph:

    > But I do think it throws a wrench in other parts of math if we assume it’s universally true. Just like in programming languages… primarily float math that these types of issues crop up a lot, we don’t just assume that the 3.999999… is accurate, but rather that it intended 4 from the get-go, primarily because of the limits of the space we put the number in.

    My response:

    It definitely doesn’t throw a wrench into things in other parts of math (at least not in the sense of there being weird murky contradictions hiding in math due to something like this). IEEE floats just aren’t comparable. With IEEE floats you always have some finite collection of bits representing some number. The arrangement is similar to how we do scientific notation, but with a few weird quirks (like offsets in the exponent, for example) that make it kinda different. But there’s only finitely many different numbers these kinds of standards can represent, due to there only being finitely many bit patterns for your finite number of bits.

    The base 10 representation of a number does not have the same restriction on the number of digits you can use. When you write 0.999…, there aren’t just a lot (but finitely many) 9’s after the decimal point; there are infinitely many 9’s after the decimal point.

    In a programming context, once you start using floating point math you should avoid using direct equality at all and instead work within some particular error bound specified by what kind of accuracy your problem needs. You might be able to get away with equating 4.000001 and 4 in some contexts, but in other contexts the extra 0.000001 might be significant. Ignoring these kinds of distinctions has historically been the cause of many weird and subtle bugs.

    Quote I took from your last paragraph:

    > I have no reason to believe that this isn’t the case for our base10 numbering systems either.

    My response:

    The issue here is that you don’t understand functions, limits, base expansions of numbers or what the definition of notation like 0.999… actually is.

    But you keep doing you.

    Lmao, be sure to work on that reading comprehension problem of yours.

    What are you even expecting? How am I supposed to read your mind and respond to all the super important and deep points you think you’ve made by misunderstanding basic arithmetic and calculus? Maybe the responsibility is on you to raise those points if you want further response from me on them and not on me to somehow just magically know what you want?


  • > You cannot use the outcome of a proof you’re validating as the evidence of the validating proof.

    You should read what I said more closely. If you read what I actually said (literally the very first paragraph), you’ll notice I told you what the proof of 0.999…=1 is.

    Let me fill in some of the details I left out for you. By definition, 0.999… IS the sum as n goes from 1 to infinity of 9/10^n. By definition this is the limit as N goes to infinity of the sum from n=1 to N of 9/10^n. The sum from n=1 to N can be evaluated (by the link in my original post) to be (9/10)(1-(1/10)^N)/(1-1/10). So, from calculus we take the limit of this formula as N goes to infinity: it is (9/10)/(1-1/10), and arithmetic tells us this value is 1. So the limit of the sequence of partial sums we mentioned earlier is just 1, and by definition this tells us 0.999…=1.

    What I’ve just outlined to you is the “infinite series and sequences argument” shown here, it is equivalent to the “rigorous proof” argument they also give.
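    As a quick sanity check on the argument above, the partial sums and the geometric-sum formula can be compared with exact rational arithmetic, so no floating point rounding is involved. This is just an illustrative sketch using Python’s fractions module, not part of the proof itself.

```python
from fractions import Fraction

def partial_sum(N):
    """Sum of 9/10^n for n = 1..N, computed with exact rationals."""
    return sum(Fraction(9, 10**n) for n in range(1, N + 1))

def closed_form(N):
    """Geometric-sum formula: (9/10)(1 - (1/10)^N)/(1 - 1/10)."""
    r = Fraction(1, 10)
    return Fraction(9, 10) * (1 - r**N) / (1 - r)

for N in (1, 5, 20):
    assert partial_sum(N) == closed_form(N)
    print(N, 1 - partial_sum(N))  # the gap to 1 is exactly 1/10^N
```

    The gap to 1 is exactly 1/10^N at every stage, which shrinks to 0 as N grows; that is precisely why the limit of the partial sums is 1.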

    > You cannot use the outcome of a proof you’re validating as the evidence of the validating proof. Prove that the limits work without a presumption that 0.999… = 1. Evaluate a limit where there’s a hole in the function for 1… then prove that 0.999… also meets that hole without the initial claim that 0.999… = 1 since that’s the claim we’re testing.

    Your whole statement here is not an issue because:

    1. In my original comment I actually told you how the proof for 0.999…=1 works.
    2. I just outlined the proof for you again.
    3. I also sent you a link just now containing more explanations and proofs of this fact.

    > So you tell me I don’t understand things… when you’ve not provided proof of anything other than just espousing that 0.999… = 1.

    Again, the issue is you failing to see that I already told you the proof of this fact in my original post (and in the current post).

    > And I know how to work with floats in a programming context. It’s the programming context that tells me that there could be a case where the BASE10 notation we use simply does “fit” the proper evaluation of what 1/3 is. Since you know… Base12 does. These are things I’ve actually already discussed… and have covered.

    I’m not sure if you meant to say the base 10 expansion of 1/3 does or doesn’t “fit” the “proper evaluation” of 1/3, but it does. Hint: try to apply my previous proof method to the series 3/10+3/100+3/1000+… to show this series evaluates to 1/3.

    The issue that you’re getting so mystified by here really has to do with divisibility. IEEE floats are irrelevant, and arguably don’t even describe the entire set of real numbers very well to begin with.

    It turns out that any rational number (i.e. a ratio of two integers) has a repeating decimal expansion no matter what base you pick (in some cases this expansion is not unique though fwiw). See here for an explanation of this. You might want to also read about Euclid’s division lemma as well.

    It’s just that the way the denominator of your rational number divides into the base you choose determines the sort of pattern you see when computing the base expansion (roughly: whether the denominator’s prime factors all divide the base determines whether the expansion can terminate).

    For example, say we want to know the base 10 expansion of 1/2. To compute the first digit, notice that since the base 10 expansion of 1/2 is given by 1/2 = b_1/10 + b_2/100 + b_3/1000 + … with each b_i some integer between 0 and 9 (inclusive), the integer part of 10(1/2) gives our first digit b_1; notice 10(1/2) is 5, so our first digit is 5.

    To compute our next digit, consider 1/2 - b_1/10 = b_2/100 + b_3/1000 + …; this tells us the second digit of our base 10 expansion is the integer part of 100(1/2 - b_1/10), but this value is just zero. If we keep repeating this process we keep getting zeroes. Notice we have a sequence of expressions 10(1/2), 100(1/2 - b_1/10), 1000(1/2 - b_1/10 - b_2/100), … that we’re using to successively calculate out the actual values b_1, b_2, … and so on.

    Since 2 divides 10, we got b_1 equal to 5, which caused 100(1/2 - b_1/10) to be equal to 0, so b_2 was zero, so 1000(1/2 - b_1/10 - b_2/100) ended up being equal to 1000(1/2 - b_1/10), which is zero, so b_3 is zero and so on. The fact that 2 divides 10 causes a cascading sequence of zeroes after b_1 = 5 when we start actually trying to compute the digits of 1/2 in base 10.

    We can try the same trick for 1/2 in base 3 now. We know our base 3 expansion of 1/2 has the form 1/2 = a_1/3 + a_2/9 + a_3/27 + … (these denominators are increasing powers of 3), where our a_i’s are integers between 0 and 2 (inclusive). So the integer part of 3(1/2) gives us our first digit a_1, but 2 doesn’t divide 3 cleanly, so we have to use Euclid’s lemma (i.e. division) to find the integer part of 3(1/2); notice 3 = 2(1) + 1, so 3/2 = 1 + 1/2, and our first digit is 1. Cool, so now we need to find our next digit; similar to before, we see it is the integer part of 9(1/2 - a_1/3) = 9(1/2 - 1/3) = 9/6 = 3/2, but this is just the same problem as before, so a_2 = 1 as well (which is what we expect). Continuing this process leads us to a sequence of 1’s for each digit in the base 3 expansion of 1/2.

    The fact that the decimal expansion for 1/2 terminates but the base 3 expansion doesn’t is due to 2 cleanly dividing 10 but not 3 in the above process. Notice also, that the general method I’ve outlined above (though not the most efficient) can be applied to any rational number and with any base that is a positive integer.
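    The multiply-and-take-the-integer-part procedure described above is mechanical enough to sketch as a tiny Python function. This is just an illustration of the method for positive fractions, with exact rational arithmetic so no rounding sneaks in; the function name is my own invention.

```python
from fractions import Fraction

def expansion_digits(p, q, base, n):
    """First n digits after the point of p/q in the given base (p, q > 0),
    using the procedure above: multiply by the base, take the integer part
    as the next digit, keep the fractional remainder."""
    x = Fraction(p, q)
    x -= int(x)                # keep only the fractional part
    digits = []
    for _ in range(n):
        x *= base
        d = int(x)             # integer part = next digit
        digits.append(d)
        x -= d                 # remainder feeds the next round
    return digits

print(expansion_digits(1, 2, 10, 4))  # [5, 0, 0, 0]: 1/2 terminates in base 10
print(expansion_digits(1, 2, 3, 4))   # [1, 1, 1, 1]: 1/2 repeats forever in base 3
print(expansion_digits(1, 3, 10, 4))  # [3, 3, 3, 3]: 1/3 = 0.333… in base 10
```

    The same three behaviors worked out by hand above (1/2 in base 10, 1/2 in base 3, 1/3 in base 10) fall out of the one procedure.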

    > But you’re cherry picking trying to make me look dumb when instead you’ve just added nothing to the conversation.

    I don’t really think I’m “cherry picking” or “adding nothing to the conversation”. You’re speaking from ignorance, and I’m pointing out the points where your reasoning is going astray and how to resolve those issues. Rather than feeling dumb because you don’t know what you’re talking about, you should read what I said to try and see why it resolves the issues you’re struggling with.


  • myslsl@lemmy.world to Lemmy Shitpost@lemmy.world · Does .999… = 1? · ↑4 · 1 year ago

    Limits don’t disprove this at all. In order to prove 0.999…=1 you need to first define what 0.999… even means. You typically define this as an infinite geometric series with terms 9/10, 9/100, 9/1000 and so on (so as the infinite sum 9/10+9/100+9/1000+…). By definition this is a limit of a sequence of partial sums, each partial sum is a finite geometric sum which you would typically rewrite in a convenient formula using properties of geometric sums and take the limit (see the link).

    The thing is that it follows from our definitions that 0.999… IS 1 (try and take the limit I mentioned), they are the same numbers. Not just really close, they are the same number.

    > https://math15fun.com/2017/02/25/finding-limits-graphically/ If a limit exists… (such as the case in this link), -1 is a hole… but not -0.999999…

    What you’re saying here isn’t actually true because -0.999… and -1 are the same number. -0.9, -0.99, -0.999 and so on are not holes, but -0.999… is a hole, because it is the number -1.

    You see the distinction here? The notations -0.9, -0.99, -0.999 and so on are all defined in terms of finite sums. For example, -0.999 is defined in terms of the decimal expansion -(9/10 + 9/100 + 9/1000). But -0.999… is defined in terms of an infinite series.

    The same sort of reasoning applies to your other decimal examples.

    > It’s even more apparent in “weird” functions like the one outlined here… https://math.stackexchange.com/questions/3136135/limits-of-functions-with-holes-variables-vs-constants for x=1 the output is 2… but for x=0.99999… it’s 1.

    You take limits of functions. The first limit is the limit of a function f that, according to the diagram of the problem, approaches 1 as x goes to 1. But the second limit is the limit of a constant function that always maps elements of its domain to the value 2 (which is f(1)). You can show using the epsilon delta definition of the limit that such a limit will be equal to 2.

    The notation here might be a little misleading, but the intuition for it is not so bad. Imagine the graph of your constant function 2, it’s a horizontal line at y=2.

    > But I think that it’s a matter of the origin of the 0.9999…

    This is correct. It follows directly from the definition of the notation 0.999… that 0.999…=1.

    > I don’t think that 3/3 is ever actually 0.9999… but rather is just a “graphical glitch” of base 10 math. It doesn’t happen in base12 with 1/3, but 1/7 still does.

    Then you are wrong. 3/3 is 1, 0.999… is 1, these are all the same numbers. Just because the notation can be confusing doesn’t make it untrue. Once you learn the actual definitions for these notations and some basic facts about sums/series and limits you can prove for yourself that what I’m saying is the case.

    > I do accept that we can just presume 0.999… can just be assumed 1 due to how common 3*(1/3) is.

    It’s not an assumption or presumption. It is typically proved in calculus or real analysis.

    > But I do think it throws a wrench in other parts of math if we assume it’s universally true. Just like in programming languages… primarily float math that these types of issues crop up a lot, we don’t just assume that the 3.999999… is accurate, but rather that it intended 4 from the get-go, primarily because of the limits of the space we put the number in.

    It definitely doesn’t throw a wrench into things in other parts of math (at least not in the sense of there being weird murky contradictions hiding in math due to something like this). IEEE floats just aren’t comparable. With IEEE floats you always have some finite collection of bits representing some number. The arrangement is similar to how we do scientific notation, but with a few weird quirks (like offsets in the exponent, for example) that make it kinda different. But there’s only finitely many different numbers these kinds of standards can represent, due to there only being finitely many bit patterns for your finite number of bits.

    The base 10 representation of a number does not have the same restriction on the number of digits you can use. When you write 0.999…, there aren’t just a lot (but finitely many) 9’s after the decimal point; there are infinitely many 9’s after the decimal point.

    In a programming context, once you start using floating point math you should avoid using direct equality at all and instead work within some particular error bound specified by what kind of accuracy your problem needs. You might be able to get away with equating 4.000001 and 4 in some contexts, but in other contexts the extra 0.000001 might be significant. Ignoring these kinds of distinctions has historically been the cause of many weird and subtle bugs.
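    The error-bound comparison I’m describing can be sketched with Python’s standard math.isclose; the specific tolerances here are arbitrary choices for illustration, not recommendations.

```python
import math

a = 0.1 + 0.2            # accumulates binary rounding error
print(a == 0.3)          # False: a is actually 0.30000000000000004
print(math.isclose(a, 0.3, abs_tol=1e-9))  # True within an explicit bound

# Pick the tolerance from the problem, not by habit: a bound that is fine
# in one context can silently swallow a difference that matters in another.
print(math.isclose(4.000001, 4, abs_tol=1e-3))  # True: 1e-3 tolerates the gap
print(math.isclose(4.000001, 4, abs_tol=1e-9))  # False: 1e-9 does not
```

    The last two lines are the point: whether 4.000001 "equals" 4 is entirely a property of the bound you chose, which is why direct == on floats is a bug waiting to happen.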

    > I have no reason to believe that this isn’t the case for our base10 numbering systems either.

    The issue here is that you don’t understand functions, limits, base expansions of numbers or what the definition of notation like 0.999… actually is.