Why can we not divide numbers by zero?

In school I was taught that numbers cannot be divided by zero. But now I think that they can, and the result of such an operation must be zero. For example, if we have three apples and divide them between two tables, we get 1.5 apples on each table, and if we want to divide those apples between 0 tables, we must have zero apples on that zero table (in reality you cannot put something into nothing, but you can think about it). That is my logic. What do you think about this? Of course there must be some theorem by some ancient guy which says why I cannot divide by zero, but I do not know such theorems, so I can do that. Good for me… :slight_smile:


Thanks. It is interesting reading…
Anyway, I only state that mathematicians are such formalists that they think that if they want something to be true, they can just define it as true, and then it must be true.
But the truth is that zero is not the same kind of number as 1, 2, 3… Zero is a fictional number, as is infinity. Actually, no such numbers exist in the real universe. Zero and infinity are abstractions; they are not real, and the rules of real numbers do not apply to them. Actually, you cannot perform any mathematical operations with zero, because zero means “no numbers here (left)” (why do mathematicians think that zero is not defined?). That also means that in mathematical operations zero can only appear as the start point or the finish point of a calculation.

Mathematicians can decide anything to be “true”. They just work from axioms (things you decide are “true”) and then try to deduce theorems to see what you can do. By changing the axioms, you define a new (mathematical) world.

Now, only some sets of axioms make sense because they happen to match some part of our reality, and maths based on such axioms can be used to do physics, engineering, or anything else that requires computations.

Mathematicians only rarely create new sets of axioms just for fun. Usually it is physicists or engineers who require them to do so. For example, electronics and in particular signal processing works a lot with the “impulse response” of a system, which is how the system would behave if you fed it an infinitely powerful but infinitely short pulse at the input. This is not well modelled by an ordinary function, so it led to a whole new field of mathematics: the theory of distributions.

Likewise in geometry: for a long time, people thought that one of the fundamental axioms of geometry (the sum of the 3 angles of a triangle is always 180°) was provable from the others (definitions of parallel lines, etc.). However, it turns out you can build a geometry where the other axioms still hold, but not this one. This happens, for example, if you draw your triangle on the surface of a sphere. And it led to non-Euclidean geometries.

So, changing the definition of division by zero just because it seems impractical is not how it works. By doing so, you are defining a new set of axioms (since you did not prove anything). Possibly your new axiom (“anything divided by 0 = 0”) conflicts with an existing one. Possibly it forces people to add special cases in many, many other places in maths to handle it properly. I think it’s better to remember that dividing by zero “doesn’t make sense”, unless you really need it to work for the special problem you are working on, and in that case it is no problem to define your own mathematics, but use it at your own risk!
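To give an idea of what that looks like in practice, here is a minimal Haskell sketch (the operator (//) and the whole setup are made up purely for illustration, nothing standard): define a division where anything divided by 0 gives 0, and then check which familiar identities survive.

```haskell
-- A toy division on Rational where anything divided by 0 is defined to be 0.
-- The operator name (//) is made up for this sketch.
(//) :: Rational -> Rational -> Rational
_ // 0 = 0       -- the proposed new axiom: x / 0 = 0
x // y = x / y   -- otherwise ordinary division

main :: IO ()
main = do
  -- The familiar identity (a / b) * b == a survives for b /= 0 ...
  print ((3 // 2) * 2 == 3)   -- True
  -- ... but breaks as soon as b == 0, so other rules need special cases too.
  print ((3 // 0) * 0 == 3)   -- False: the left-hand side is 0, not 3
```

The rule that “dividing by b and then multiplying by b gets you back where you started” is just one of the many places that would now need a special case for b = 0.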

Ok, let’s talk further. What would happen with computer calculations if division by zero resulted in zero?

Nothing, because division by zero is forbidden, so it is not used anywhere. Processors generate an exception when it happens, so no working program does it.

Remember that computers are just tools that let us count and compute faster. If something is not used in the real world (by that I mean papers, theories, applications, etc.), it’s not used in the computer world.
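For a rough picture of how this behaves today, here is a small Haskell sketch (details differ between languages and processors, so treat it as an illustration only): integer division by zero raises an exception, while IEEE 754 floating point yields Infinity or NaN instead of trapping.

```haskell
main :: IO ()
main = do
  print (1.0 / 0.0 :: Double)   -- Infinity: IEEE 754 floats do not trap by default
  print (0.0 / 0.0 :: Double)   -- NaN
  print (1 `div` 0 :: Int)      -- throws a "divide by zero" exception and aborts
```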

With the following assumptions:

0×1=0
0×2=0

The following must be true:

0×1=0×2

Dividing by zero gives:

(0/0)×1=(0/0)×2

1=2

– this is wrong; the sequence of operations is wrong (the exception-shortcut in the rules of mathematical operations was inadequate), and dividing by zero must be written as follows (if 0/0=0 and N/0=0):

  1. (0×1)/0=(0×2)/0
  2. (0/0)×(1/0)=(0/0)×(2/0) [sorry, this is not right]
  3. 0×0=0×0
  4. 0=0

Which is true.

And with
x=x² → x-1=x²-1 → (x-1)/(x-1)=(x²-1)/(x-1),
when x=1

– this is the same problem with a wrong sequence of mathematical operations:

with (x-1)/(x-1) and x=1, we must get 0/0, not 1. And the result must be 0=0.

Which is also true.

Wikipedia is wrong.

I will give you the first one, but the second one shows exactly why division by 0 cannot be equal to 0.

No. You’re mandating a specific sequence to avoid inconsistency, which is a special case PulkoMandy was talking about.

Why? What mandates doing that? (x-1)/(x-1) is 1. Unless you want to say that 2/2=0 too. And 3/3=0. And x/x=0. That breaks multiplication though, because 0*x=x is obviously not true.
Only with this rule you’re introducing could (x-1)/(x-1) also be 0 (in the one specific case of x=1). That’s why this new rule is wrong.

No, you want this rule to be right, so you see what you want to see.

Oh no! Operations in “( )” must be done first, and “x” is not a number, just as “0” is not. And as I said, “0” in calculations can only be an input or output point (by the definition of zero, see above). And yes, there is a particular sequence for doing mathematical operations, and I only fixed and explained it a little.
… “(x-1)/(x-1) is 1” – not always; sometimes it is zero (when x=1).

Except variable evaluation is not a mathematical operation. I can evaluate whenever I want (“never” also being an option).

You have been warned about creating your own mathematics full of special cases that may or may not be useful for solving practical problems. This can be fun to explore, but it is probably not very useful.

As to what would happen if computers allowed this: some languages require that dividing by zero raises an exception; this warns people that something went wrong.
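As an illustration (a Haskell sketch; other languages have their own mechanisms), the exception can be caught and reported, which is exactly that warning that something went wrong:

```haskell
import Control.Exception (ArithException, evaluate, try)

main :: IO ()
main = do
  -- Force the division and catch the arithmetic exception it raises.
  result <- try (evaluate (1 `div` 0)) :: IO (Either ArithException Int)
  case result of
    Left err -> putStrLn ("something went wrong: " ++ show err)  -- "divide by zero"
    Right n  -> print n
```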

Let’s take a different example: I currently work on a project where there is RAM available at address 0. As a result, using NULL pointers does not crash. This was never useful to me; however, I spent countless hours debugging problems because someone used a NULL pointer without checking it, and the program would still crash, not immediately, but some time later and for non-obvious reasons. This is a similar example of a non-helpful change. I much prefer to be warned when I do something that results in “nonsense” or “undefined”, rather than the computer doing “something” and leaving me puzzled about the results.

Let’s take another example in mathematics. You are trying to compute the mean of N samples. You use the usual formula for arithmetic mean: (a+b+c+…)/N.

In the special case of N=0, your result is not 0. Your result is that there are no samples and there is no mean to compute. So, in this case it makes sense that the result of your operation is “not a number”. Making it 0 introduces a risk that the 0 value would sneak into the next computations you do with it and skew your results (maybe you computed the mean of samples thousands of times, and then you want to do something else with the results).
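Here is a small sketch of that idea in Haskell (the helper name safeMean is made up for this example): with no samples the result is Nothing rather than 0, so the “there is no mean” case cannot be mistaken for a genuine mean of 0.

```haskell
-- A made-up helper for this example: the arithmetic mean of a list of samples.
-- With no samples there is no mean, so the result is Nothing rather than 0.
safeMean :: [Double] -> Maybe Double
safeMean [] = Nothing
safeMean xs = Just (sum xs / fromIntegral (length xs))

main :: IO ()
main = do
  print (safeMean [1, 2, 3])   -- Just 2.0
  -- If this silently returned 0, that 0 could sneak into later computations
  -- (a mean of means, for example) and quietly skew the results.
  print (safeMean [])          -- Nothing
```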

It seems like building a tool to shoot yourself in the foot (like NULL pointers), and additionally it is still unclear which other problems the tool would solve. And who would want to use a tool that is only useful for shooting yourself in the foot, right?


Off topic, but zero is actually an even number. It’s even (pun intended) the most even number we have.


Hello, I am a mathematician.

Historically speaking, the number 0 was introduced fairly recently. The infinity symbol and imaginary numbers already existed, but 0 did not yet.

So, let us talk pragmatically: what do we want to achieve versus what do we lose in terms of universality?
If you want an arithmetic (numeric calculus) system capable of:

  1. Addition, which is:
    • Symmetric: a + b = b + a
    • Associative: a + (b + c) = (a + b) + c = a + b + c
    • Contains a neutral element 0 with the property: 0 + a = a, for any a
    • Complete: for any number a, there exists a number (-a) such that (-a) + a = 0
  2. Multiplication, which is:
    • Symmetric: a * b = b * a (there are algebras where this is not required, and actually is not true)
    • Associative: a * (b * c) = (a * b) * c = a * b * c
    • Contains a neutral element 1 with the property: 1 * a = a, for any a; if multiplication is not symmetric, also require a * 1 = a (if it is symmetric, this equality follows from the previous one).
    • Almost complete: for any number a different from 0, there exists an inverse a^(-1) such that a^(-1) * a = 1
  3. A distributive rule connecting addition and multiplication: (a + b) * c = a * c + b * c. If multiplication is not symmetric, we also need to require that a * (b + c) = a * b + a * c (if multiplication is symmetric, it follows from the previous equality).

This system was designed to model the numbers we use every day. But it implies a limitation: multiplication is inevitably only “almost complete”. When I say “implies”, I am not speaking about the will of some particular mathematician; without this limitation, the very system becomes contradictory by construction, and this can be proved from the text above. Indeed, for any number a, the following is always true:
a * 0 = a * (-b + b) = a * (-b) + a * b = - a * b + a * b = 0.
Moreover, if we require multiplication to be complete, that is:
x / y = z => x = y * z, for any x, y, z,
even for y = 0 and x different from 0, then putting x = a, y = 0, z = 0 in this rule (that is, accepting a / 0 = 0) gives:
a = 0 * 0 = 0, for any a.
So all numbers would be equal among themselves.
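For what it is worth, the key step a * 0 = 0 can also be checked mechanically. Here is a sketch in Lean 4 with Mathlib (assuming its Ring class, which matches the axioms listed above):

```lean
import Mathlib.Tactic

-- a * 0 = 0 follows from distributivity, the additive inverse and the
-- neutral element of addition, exactly as in the derivation above.
example {R : Type} [Ring R] (a : R) : a * 0 = 0 := by
  have h : a * 0 + a * 0 = a * 0 + 0 := by
    rw [← mul_add, add_zero, add_zero]
  exact add_left_cancel h
```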

We cannot just exclude 0 from the numbers, because it is the neutral element of addition (even though it creates a limitation in multiplication). So by trying to make multiplication absolutely complete, we must give up the neutral element of addition. By giving up this element, we also give up the completeness of addition (and there is not even a possibility of “almost complete”). So we gain 1 point and lose 2 points, one of which is already broader than the assumed gain (we trade the completeness of addition for the mere almost-completeness of multiplication).

If you want special treatment of some numbers, then you give up the universality that makes mathematics so useful. Namely, everything above is expressed using letters instead of actual numbers, because of “for any number” and “there always exists”. With special treatment of some numbers, you cannot speak so concisely.


alpopa, that is a really nice description. I am glad we have mathematicians. Damoklas, if you want to experiment with alternative systems, you may want to look at symbolic logic. With certain programming languages such as LISP, PICAT, or Mathematica, you can more easily write your own systems. The first symbolic math system, Macsyma, was written in LISP and would symbolically process formulas like we were taught in school.

everything is possible…
you just have to believe…

AndrewZ, glad to see mathematicians are still good for something. I found Haskell a good tool for this kind of experiment.

brunobastardi, St. apostle Paul, whose belief is remarkable, wrote “All things are lawful for me; but not all things are advantageous. All things are lawful for me; but I will not let myself be brought under authority by anything” - 1 Corinthians 6:12.


This is an interesting video on the topic.