In short, I am posting three documents, all geared towards a solution to the P versus NP problem.

The mathematics that governs the official solution has been presented to, and subsequently endorsed by, two separate mathematics journal editors.

Now I am presenting the mathematics to the public, and it was suggested that I post here.

The reasoning behind the three documents is this: the first is the official proof, which is written for professional mathematicians and is very brief concerning certain ideas, which may not be obvious even to those with a strong mathematical background.

The next is a version of the official proof that elaborates on every point made in the official proof, and is much more verbose. It's a complete proof, intended for those with a very strong mathematical background. It could properly be called the extended version of the official proof.

The other document is a basic mathematical overview, intended for anyone to read, so they can understand the mathematics that governs the solution.

It is chock-full of new mathematics, and has gotten great reviews from the readers I have had thus far. It is intended to be very informative for anyone with the slightest semblance of a mathematical background who wants to understand the official proof, and why and how it resolves the problem.

Any reviews and comments from the present community will be read.

I hope every well-intentioned person has a great day, and likewise enjoys reading the articles.

Basic mathematical overview of proof:

Official proof:

Extended proof:


@carlostomas @freemo It's certainly an approach worth looking at.

I've only looked at the overview of the proof, but there is one thing that occurs to me, and that is that, when using the distance argument (i.e. showing that the numbers have a common factor with each other, and that the factor is not present in the number sought as a sum), it is *not* required that *all* the terms have the same factor.

To illustrate this, let us assume a set of N numbers, all of which are multiples of three (some may be negative). To this set of numbers, let us add the number 1, which is not a multiple of three. We now have a set of N+1 numbers, all but one of which are multiples of three; and there is no subset of these numbers which can possibly add up to 20. So we can use distance for identification of multiple subset sums even when some elements of the sum do not share a common factor.
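The claim above can be checked by brute force. A minimal sketch in Python, using a small hypothetical instance (the specific multiples of three and the extra 1 are my own illustrative choices, not from the thread):

```python
from itertools import combinations

# Hypothetical instance: multiples of 3 (one negative), plus the single
# extra element 1. Every subset sum is then 0 or 1 modulo 3.
numbers = [3, 6, -9, 12, 15, 1]
target = 20  # 20 % 3 == 2, so it should be unreachable

# Enumerate every non-empty subset sum.
sums = {sum(c) for r in range(1, len(numbers) + 1)
        for c in combinations(numbers, r)}

print(target in sums)         # False: 20 is never reached
print({s % 3 for s in sums})  # residues observed are only 0 and 1
```

The brute-force enumeration confirms that no subset reaches 20, even though one element (the 1) does not share the common factor 3.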

If that common factor is 3, then we can have at most one number which does not share the common factor. However, for larger common factors, the number of elements we can have which do not share that factor becomes larger; with a common factor of 5, we can have at least three numbers in the set which do not share the common factor and yet still leave certain totals impossible to reach.

I'm not entirely sure how this applies to the subset problem in a general sense.

@ccc @freemo

Hi there! Thanks for reading the document.

I must say you are also reading very carefully, and I appreciate that. It is a great question, and more importantly it means you're trying to understand.

What you have stated is perfectly right, and that is the obvious use of distance: just outright calculating the value of a subset sum, and seeing if that value equals some other value. There is no need for common factors there.

What we are after is using distance without dedicating a calculation per subset sum. If we just calculated the value of each subset sum, we would be at a 1:1 ratio of calculations to subset sums, which is exponential in the case concerned, since there are exponentially many subset sums that need to be decided as 'not equal' to a value v.

We need a many:1 ratio, so that somehow we can reduce the calculations from exponential to polynomial (if it were possible).

The only way to do so, utilizing the locating notion of distance, is if the subset sums share a common factor.

ex: (2, 4, 6, 8, 10, 12, 14, 16, 18), v = 31. Without any calculation, I can say that no combination of these numbers summed can equal 31, since they are all divisible by 2, and therefore every subset sum also has 2 as a common factor, which 31 does not.

That's 511 different non-empty combinations I can say are not equal to 31, without calculating each individually.

It's this bypassing of calculating each subset sum whose possibility we are trying to assess. And the only way to utilize distance with a many:1 ratio is finding common factors, as in the above example.

I really appreciate your attention. If you have any more questions, please don't hesitate; your question is a great one, and you gave a very coherent example to make your thoughts concrete.

Many thanks again, I hope you have a nice day.

@carlostomas @freemo

I see what you're getting at there, yes; but my point was that you can do many:1 _without_ needing a constant factor in _all_ terms.

Consider the following set, for example: (1, 3, 6, 9, 12, -15, -18, 21, 24). You will notice that all of them are multiples of 3; _except_ the 1 at the beginning of the list. Therefore, any sum of any subset of this set will be either a multiple of three, or a multiple of three plus 1; in no circumstances can any subset of this set ever add up to 20 (which is a multiple of 3 plus 2). Thus, I am bypassing the calculation of each subset sum _without_ requiring that _each_ number have the _same_ common factor. And as the common factor amongst some of the set gets larger, the number of numbers that I can have without that common factor also increases...
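The set above can be checked exhaustively as well; this sketch confirms that every non-empty subset sum falls in the residue classes 3k or 3k + 1, so 20 (which is 3k + 2) is unreachable.

```python
from itertools import combinations

# The set from the example: all multiples of 3 except the leading 1.
numbers = [1, 3, 6, 9, 12, -15, -18, 21, 24]
target = 20  # 20 % 3 == 2

reachable = {sum(c) for r in range(1, len(numbers) + 1)
             for c in combinations(numbers, r)}

print({s % 3 for s in reachable})  # {0, 1}: 3k or 3k + 1, never 3k + 2
print(target in reachable)         # False
```

So the many:1 exclusion does go through here even though one element lacks the common factor, which is exactly the point being made.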

@ccc @freemo

It's a wonderful, wonderful point.

And when considering subset sums of varying lengths, the idea holds quite well.

Now consider subset sums of a fixed length, and suppose you only know the factorizations for very small portions of that length: take a fixed length X, and suppose we know the prime factors of lengths of about log(X).

What you're doing is combining the smaller lengths and finding the factor of some larger length, one greater than log(X). If that larger length does not occupy the entire sum, then each remaining length still needs a common factor in order to factor the sum without determining the sum.

In the fixed-length case, you have determined the same common factor each time, and each sum's form is either (3k + 1) or (3k).

In your case, each subset sum has a common factor because you're looking at one subset sum, the entire set as a sum, after you combine the subset sums and have determined its form is (3k + 1) or (3k), neither of which matches the value you seek.

In large cases of numbers chosen without pattern (lottery numbers of length 50, say), such patterning will almost never happen. And if it did happen, the pattern would have to occupy the entire set: a set of 49 values where you can factor as you showed would still need a 50th term of the form (3k) or (3k + 1), hence a common factor in each subset sum, for an unsolved subset sum. To reiterate slightly: one would be determining common factors for very large subset lengths, with a subtle nuance, which is not possible in polynomial time.

The proof overview does not go into great detail here because this is content for the more advanced documents.

In cases where you can only determine the common factors for very small subset sums, the probability is very low that these each share the same factor. And if they don't each share a common factor, I believe you can see where the limitation arises.

It's a wonderfully crafted objection.
I really appreciate your careful and coherent consideration, and if it's not clear yet, I am happy to elaborate further. I still hope you're having a nice day!

@ccc @freemo

A much shorter point: how do you determine, from every subset sum being a multiple of 3 (or a multiple of 3 plus 1, when the 1 is included), that none of them equals 20?

Because in general that criterion will not hold: a target may itself be of the form 3k + 1, or 3k, even a lottery value that ends in zero.

i.e. take the value 30340:

30340 = 3(10113) + 1, while 30339 = 3(10113).
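A quick sanity check of that decomposition, in Python:

```python
# 30340 sits in the residue class 3k + 1, so the "3k or 3k + 1"
# criterion alone cannot exclude it as a target value.
v = 30340
k, r = divmod(v, 3)
print(k, r)  # 10113 1, i.e. 30340 = 3(10113) + 1
```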

Without dedicated calculations for each subset sum, you cannot simply rule out the target for any of them.

In actuality, what you'll realize is that you did many more calculations, using both locating notions, to arrive at an expedited solution.

It's really great feedback, and I hope to get more like it! I hope you're still having a great Friday!

@ccc @freemo

After reviewing the idea for some time, I believe the point is noteworthy; I'm not sure what your intention was, but it is a nice idea.

There are some other many-to-1 reduction techniques that are not addressed due to the content's focus. It's clearly not a technique that can reduce the determination from exponential to polynomial, given the probabilities contained in the proofs.

Since you were able to point out the idea, I hope you may consider reading the extended proof and giving your feedback on it. I can also tell you, with likely more brevity, where such a technique would be addressed.

I don't believe any of the content in the extended proof would be out of reach for you. If you have any questions, I would happily answer them.

I really appreciate your feedback, and hope you are still having a nice day.

@ccc @freemo

I very much appreciate your saying it's an approach worth looking at. I hope more will soon.

Qoto Mastodon