@ccc

The overall manuscript hinges on being able to show that there exists an exponentially large set of subset sums, each of which requires a dedicated calculation.

That is why dedicated calculations fall within the topic I was addressing.

I would be happy to show that such a calculation cannot reduce the number of subset sums that need to be determined from exponential to polynomial. I don't think it's something I would include in the manuscript, as it follows quite naturally and obviously from what is already there. But it is a very sound point, and I appreciate your making it.

I wrote to you earlier, and hope you'll consider reading the actual proofs and trying to understand the approach.

Again, I hope your day goes great.

@ccc @freemo

After reviewing the idea for some time, I believe the point is noteworthy. I'm not sure what your intention was, but it is a nice idea.

There are some other many-to-one reduction techniques that are not addressed, due to the content's focus. It is clearly not a technique that can reduce the determination from exponential to polynomial, given the probabilities contained in the proofs.

Since you were able to point out the idea, I hope you will consider reading the extended proof and giving your feedback on it. I can also tell you, with likely more brevity, where such a technique would be addressed.

I don't believe any of the content in the extended proof would be out of reach for you. If you have any questions, I would happily answer them.

I really appreciate your feedback, and hope you are still having a nice day.

@ccc @freemo

A much shorter point: how do you determine, from every subset sum being a multiple of 3 except one that is a 3k+1, that none of them equals 20?

Because in general that criterion will not hold: a value can be a 3k+1 (or a 3k) and still be a lottery value that ends in zero.

e.g., take the value 30340, which ends in zero:

30340 = 3(10113) + 1

(and likewise 33040 = 3(11013) + 1).
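
To make that concrete, here is a minimal Python check (my own sketch, not from the manuscript) confirming that values of the form 3k+1 can indeed end in zero:

    # sanity check: 3k+1 values can still end in zero
    for v in (30340, 33040):
        assert v % 3 == 1        # v has the form 3k + 1
        assert v % 10 == 0       # and v ends in zero
        print(v, "= 3*(%d) + 1" % (v // 3))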

Without a dedicated calculation for each subset sum, you cannot just rule out the answer for any of them.

In actuality, what you'll realize is that you did a lot more calculations, using both locating notions, to arrive at an expedited solution.

It's really great feedback, and I hope to get more like it! I hope you're still having a great Friday!

@ccc @freemo

I very much appreciate your statement that it's a worthwhile approach to look at. I hope more people will soon.

@ccc @freemo

It's a wonderful, wonderful point.

And when considering subset sums of varying lengths, the idea holds quite well.

Consider subset sums of a fixed length. Then suppose you only know the factorizations for very small portions of that length: take a fixed length X, and suppose we know the prime factors of sub-lengths of size log(X).

What you're doing is combining the smaller lengths and finding the factor of some larger length, one that is > log(X). If that larger length does not occupy the entire sum, then every remaining length still needs a common factor in order to factor the sum without determining the sum.

In the fixed-length case, you have determined the same common factor each time, and each sum's form: (3k+1) or else (3k).

In your case, each subset sum has a common factor, since you're looking at one subset sum, the entire set as a sum, after you combine the subset sums, and you have determined its form to be (3k+1) or else (3k), neither of which matches the term sought in your equation.

In large cases of numbers chosen without pattern (lottery numbers of length 50, say), such patterning will almost never happen. And if it did happen, the pattern would have to occupy the entire set: a set of 49 values that you can factor as you showed would still need a 50th term of the form (3k) or else (3k+1), hence a common factor in each subset sum, for any unsolved subset sum. To reiterate slightly: one would be determining common factors for very large subset lengths, with a subtle nuance, and that is not possible in polynomial time.

The proof overview does not go into great detail because that is content for the more advanced documents.

In cases where you can only determine the common factors for very small subset sums, the probability is very low that these each have the same factor. And if they don't each have a common factor, I believe you see where the limitation then arises.
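
As a rough illustration of how unlikely a shared factor is, here is a small Monte Carlo sketch in Python (my own, with an arbitrary value range and subset size, not taken from the proofs):

    import random
    from math import gcd
    from functools import reduce

    trials = 100_000
    k = 5                                    # size of each small subset
    hits = 0
    for _ in range(trials):
        subset = [random.randint(1, 10**6) for _ in range(k)]
        if reduce(gcd, subset) > 1:          # whole subset shares a factor
            hits += 1
    print("fraction sharing a common factor:", hits / trials)
    # for k = 5 this lands near 1 - 1/zeta(5), about 0.036,
    # and it shrinks rapidly as k grows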

It's a wonderfully crafted objection.
I really appreciate your careful and coherent consideration, and if it's not clear yet, I am happily willing to elaborate further. I still hope you're having a nice day!

@ccc @freemo

Hi there! Thanks for reading the document.

I must say you are reading very carefully, and I appreciate that. It is a great question, and more importantly it means you're trying to understand.

What you have stated is perfectly right, and that is the obvious way distance is used: just outright calculating the value of a subset sum and seeing whether that value equals some other value. There is no need for common factors there.

What we are after is using distance without dedicating a calculation per subset sum. If we just calculated the value of each subset sum, we would be at a 1:1 ratio of calculations to subset sums, which is exponential in the case concerned, as there are exponentially many subset sums that need to be decided as 'not equal' to a value v.

We need a many:1 ratio, so that somehow we can reduce the calculations from exponential to polynomial (if it were possible).

The only way to do so, utilizing the locating notion of distance, is if the subset sums share a common factor.

e.g., (2, 4, 6, 8, 10, 12, 14, 16, 18), v = 31: without any calculation, I can say that no combination of these values summed can equal 31, since they are all divisible by 2, and therefore every subset sum also has 2 as a factor, while 31 does not.

That's 511 different combinations (every nonempty subset) that I can say are not equal to 31, without calculating each individually.
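
Here is that example as a short Python sketch (my own, just restating the post): one divisibility check covers every nonempty subset at once, and a brute-force pass confirms it:

    from itertools import combinations
    from math import gcd
    from functools import reduce

    values = (2, 4, 6, 8, 10, 12, 14, 16, 18)
    v = 31

    g = reduce(gcd, values)               # every element is divisible by g = 2
    print("possible match:", v % g == 0)  # False, since 31 is odd

    # brute-force confirmation over all 2**9 - 1 = 511 nonempty subsets
    sums = {sum(c) for r in range(1, len(values) + 1)
                   for c in combinations(values, r)}
    assert v not in sums
    print("checked all 511 subsets; none sum to", v)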

It's the possibility of bypassing the calculation of each subset sum that we are trying to assess. And the only way to utilize distance with a many:1 ratio is by finding common factors, as in the above example.

I am really appreciative of your attention. If you have any more questions, please don't hesitate; your question is a great one, and you gave a very coherent example to illustrate your thoughts.

Many thanks again, I hope you have a nice day.

@urnerk @freemo @onan @math

I am very glad you are enjoying the playing-card metaphors; they are a very useful mathematical handle when addressing the mechanics of quantifying the lengths of procedures that accomplish fundamental tasks (such as identifying the values of playing cards). As for the note on the usage of 'obviate': it is unobvious that its normal English meaning is to anticipate and prevent. I look forward to reading the English history of the word, as it is not blatantly intuitive; it should be informative. I appreciate your input, and your reading of the documents, very much. I hope you have a nice day.

@onan @freemo @urnerk @math

I appreciate the promotion and the recommendations. The math has already been endorsed by a pair of mathematics journal editors. Now it is a matter of presenting the findings to the community at large. Thanks again.

@freemo @math

A really interesting one is the probability of a pair of numbers not having a common factor:

6/pi^2

a.k.a. a pair of numbers being relatively prime.

It shows the persistent natural occurrence of the number pi (3.14...) in yet another setting.
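
A quick numerical sketch (my own illustration) of that classical result in Python:

    import random
    from math import gcd, pi

    trials = 200_000
    coprime = sum(1 for _ in range(trials)
                  if gcd(random.randint(1, 10**6),
                         random.randint(1, 10**6)) == 1)
    print("estimated P(coprime):", coprime / trials)
    print("6/pi^2             :", 6 / pi**2)   # ~0.6079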

Greetings!

In short, I am posting 3 documents, all geared toward a solution to the P versus NP problem.

The mathematics that governs the official solution has been presented to, and was subsequently endorsed by, two separate mathematics journal editors.

Now I am presenting the mathematics to the public, and I got the suggestion to post here.

The reasoning behind the three documents is as follows. One is the official proof, which is written for professional mathematicians; it is very brief concerning certain ideas and may not be obvious even to those with a strong mathematical background.

The next is a version of the official proof that makes explicit every point made in the official proof, and is much more verbose. It is a complete proof, intended for those with a very strong mathematical background. It could properly be called the extended version of the official proof.

The other document is a basic mathematical overview, that is intended for anyone to read, so they can understand the mathematics that governs the solution.

It is chock-full of new mathematics and has gotten great reviews from the readers I have had thus far. It is intended to be very informative for anyone with the slightest semblance of a mathematical background who wants to understand the official proof, and why and how it resolves the problem.

Any reviews and comments from the present community will be read.

I hope every well-intentioned person has a great day and, likewise, enjoys reading the articles.

Basic mathematical overview of proof:

drive.google.com/file/d/1Y-GZK

Official proof:
drive.google.com/file/d/1Q_LxH

Extended proof:
drive.google.com/file/d/1lhAIL

