Mathematicians?

Is that like a junior arithmancer?

4 Likes

Speaking of Arithmancers, if anyone in here is a Harry Potter fan and interested in maths-based magic, I’d highly recommend the Arithmancer Trilogy by White Squirrel. The core premise is that Hermione is a maths prodigy rather than a bookworm, and she tests into Arithmancy during her first year at Hogwarts while keeping up with her maths education on the Muggle side. I’d also recommend the rest of White Squirrel’s works, but the Arithmancer Trilogy is their only complete, full-length fic (their other full-length fic, The Accidental Animagus, has been stalled in Year Six for years, and all of their complete fics are either one-shots or stand-alone novellas). Part 1, The Arithmancer, covers Years 1-4; Part 2, Lady Archimedes, covers Years 5-7; and Part 3, Annals of Arithmancy, covers about 20 years from the post-Voldemort reconstruction of Wizarding Britain until the late 2010s.

1 Like

Again, sorry for double post, but I think I’ve figured out how to animate my generated graphics and so have a couple of animations to share.

The first is just a simple slice-by-slice animation of the RGB colorspace. If all went well, the first frame should be a 2-d gradient with black in one corner that fades to red and blue in the adjacent corners and magenta in the far corner, and then as it animates, black should turn to green, red to yellow, blue to cyan, and magenta to white before reversing.
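In case anyone wants to play along, here’s one way a single frame of that slicing could be generated as an ASCII (P3) PPM; this is a sketch matching the description, not my exact code, and the function name is arbitrary:

```cpp
#include <sstream>
#include <string>

// One frame of the RGB-cube animation as ASCII (P3) PPM text:
// red grows left to right, blue grows top to bottom, and the whole
// frame shares green level g, so frame 0 runs black -> red/blue ->
// magenta and frame 255 runs green -> yellow/cyan -> white.
std::string slice_ppm(int g, int size = 256) {
    std::ostringstream out;
    out << "P3\n" << size << " " << size << "\n255\n";
    for (int y = 0; y < size; ++y)
        for (int x = 0; x < size; ++x)
            out << x << " " << g << " " << y << "\n";
    return out.str();
}
```

Looping g from 0 to 255 and writing each string to its own frame file gives the forward half of the animation; the reversed second half is just the same frames in the opposite order.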

The second is inspired by Pascal’s triangle, specifically the fact that the nth row of the triangle (counting rows from zero) gives the coefficients of the binomial expansion. So its equation is basically:
r = b = (x+y)^z

where r and b are the red and blue channels respectively, x and y are the number of pixels from the center of the frame, and z is the frame number, counting from zero. The first frame should be a purple so dark as to be indistinguishable from black (010001 would be the hex value), but I have no idea what it would look like from there. And of course, I use mod 256 to keep the values from getting too big and absolute values to keep things positive. For all I know, it could be a chaotic mess that triggers seizures if viewed by susceptible people on the wrong kind of display… and in retrospect, I probably should have kept the individual frames to upload in a folder… And like with the color cube animation, the second half is just a reversal… And just to illustrate how inefficient ASCII-based portable pixel maps are as the raw output of my code: while the final animated PNG is less than a megabyte, the PPMs totalled 1.2 gigabytes.
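In code terms, the per-channel rule works out to something like the following sketch (not my exact source; the function name is arbitrary, and computing the power one multiplication at a time, reducing mod 256 as I go, keeps the intermediate values from overflowing):

```cpp
#include <cstdlib>

// r = b = |x + y|^z mod 256, with x and y the pixel distances from the
// frame's centre and z the frame number. Reducing mod 256 after every
// multiplication keeps the value small without changing the result.
int channel(int x, int y, int z) {
    long base = std::labs((long)x + (long)y) % 256;
    long v = 1;               // anything^0 == 1, hence the 010001 first frame
    for (int i = 0; i < z; ++i)
        v = (v * base) % 256;
    return (int)v;
}
```

For frame zero, channel(x, y, 0) is 1 at every pixel, which is exactly where the near-black 010001 first frame comes from.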

3 Likes

:star_struck: Those are very pretty!!

Holy moly! That’s a lot. But as you said, this is due to the text-based image file format.

And what it looks like:

Lines from the bottom left to the upper right. And the lines are repeatedly divided into subsections (which are lines, too). All lines are parallel. The visual effect looks chaotic at first glance, but the more you look, the more patterns you see.

Btw I didn’t know there were animated PNGs. That’s cool, because animated GIFs have low quality.

1 Like

I don’t think the original PNG standard allowed for animation, but APNG has been around since before I went blind and allows for full 24-bit color, 8-bit alpha, and strong lossless compression… and yet, most people still assume GIF is the only image format that supports animation… heck, when I googled how to make an APNG, all the results were on how to animate GIFs instead…

Luckily, the tool I used to merge the frames into the APNGs I uploaded, apngasm, is provided by a package of the same name. (I also found apngopt for optimizing animated PNGs, but after running the frames through optipng (an optimizer for static PNGs that can import PPM, BMP, static GIF, the first frame of an animated GIF, and I think static TIFF/the first image of a multi-image TIFF file) at its strongest preset and then animating them with apngasm, apngopt had no way to make the file smaller.)

Sadly, PPM has no support for an alpha channel, and I have no idea how to write code that reads from or writes to a binary file, much less how to write to anything specific. Still, it’s kind of funny that less than 1600 characters of source code compiles to a 24 KB binary that generates 1.2 GB of raw output files that then compress to less than a megabyte… Of course, in this specific case it helps that, since R=B and G=0 for all pixel coordinates, there are only 256 unique colors (all shades of magenta), so the first thing optipng does is convert from 24-bit 3*8 RGB color down to an 8-bit palette…

Hmm… there’s an idea: have all three color channels follow the same base equation, but have their z components out of phase… Or have red descending in z-value at the same time blue is ascending… Ugh, I need some kind of formula parser so I’m not constantly tweaking source code, recompiling, and dealing with a mess of user prompts and if-elses while trying to combine code that produces one style of image with code that produces another into something that can produce variations of either.
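On the binary-file point, it turns out a binary (P6) PPM is just the usual text header followed by three raw bytes per pixel, so the jump from P3 isn’t as scary as I’d feared. A sketch of the idea (function name arbitrary, and the red/blue gradient is only a placeholder pattern, not one of my images):

```cpp
#include <sstream>
#include <string>

// Build a binary (P6) PPM in memory: the header stays human-readable
// text, but each pixel becomes three raw bytes instead of up to
// twelve characters of decimal digits and whitespace.
std::string binary_ppm(int w, int h) {
    std::ostringstream out;
    out << "P6\n" << w << " " << h << "\n255\n";
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            char rgb[3] = { (char)(x % 256), 0, (char)(y % 256) };
            out.write(rgb, 3);   // raw bytes, not formatted text
        }
    return out.str();
}
```

Writing the result to disk just needs a std::ofstream opened with std::ios::binary so nothing translates the bytes on the way out.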

1 Like

Ah, and our dear Mewtamer has reached this stage of an experimentation project, lol. Mood.

1 Like

I think the word you’re looking for is “physicist”.

2 Likes

Let me make it quite clear that Ulam was no café mathematician. Though well known for his physics work, he was a mathematical grandmaster.

Here’s another little piece in my series of math-inspired images. This time, instead of treating the color channels separately, I’ve mapped the output of the generating function to a 1530-step color wheel (RYGCBM are the primaries and secondaries, and it takes 255 steps from a primary to its neighboring secondary). This time I went directly with the values of Pascal’s triangle… so the first row/column of pixels should be all red with just the slightest bit of green, the second row/column should be a smooth gradient from red to yellow, and each subsequent one should get further along the rainbow by taking progressively larger steps… Unexpectedly, there are some black pixels in the resultant image. After adding a second output file that prints the coordinates, the value used to determine the color, and the value of Pascal’s triangle at that position, I think it’s where the value of Pascal’s triangle is so big that it rolls over to negative values in a signed integer… and apparently a negative number modded by a positive value gives a negative, and my value-to-color code assumes positive values, so a negative results in the color channels staying at the zero values I initialize them to… I do kind of wonder if the colored-versus-black boundary forms a cool shape… and I so wish I knew of an audio format where I could just write samples as ASCII decimal text to a file I could then compress with a codec…
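For reference, here’s roughly what the wheel mapping and the fix for the negative-modulo problem look like (my reconstruction, not my exact code; the function names are arbitrary):

```cpp
// In C++, (-5) % 3 is -2, not 1, which is exactly how the black pixels
// sneak in. Folding the value back into [0, m) first fixes it:
long positive_mod(long v, long m) {
    return ((v % m) + m) % m;
}

// The 1530-step wheel: six segments of 255 steps each, running
// red -> yellow -> green -> cyan -> blue -> magenta -> red.
void wheel(long step, int& r, int& g, int& b) {
    step = positive_mod(step, 1530);
    long seg = step / 255, t = step % 255;
    switch (seg) {
        case 0:  r = 255;          g = (int)t;       b = 0;            break;
        case 1:  r = 255 - (int)t; g = 255;          b = 0;            break;
        case 2:  r = 0;            g = 255;          b = (int)t;       break;
        case 3:  r = 0;            g = 255 - (int)t; b = 255;          break;
        case 4:  r = (int)t;       g = 0;            b = 255;          break;
        default: r = 255;          g = 0;            b = 255 - (int)t; break;
    }
}
```

wheel(1, …) gives (255, 1, 0): all red with just the slightest bit of green, matching the first row/column described above.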

Anyways, the new image can be found at:

Well, I’m off to bed.

2 Likes

Exercise: assign colours to the entries of Pascal’s Triangle according to the values modulo a prime (7 works fine, for instance). A well-known, pretty pattern should appear. For bonus points, explain why that pattern appears.
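A sketch for anyone who would rather compute the residues than work them out by hand; it renders the pattern without giving away the why (the function name is mine):

```cpp
#include <vector>

// Pascal's triangle with every entry reduced modulo p as it is built.
// Reducing row by row keeps the numbers tiny, so the signed-integer
// rollover mentioned a few posts up can't occur.
std::vector<std::vector<int>> pascal_mod(int rows, int p) {
    std::vector<std::vector<int>> t(rows);
    for (int n = 0; n < rows; ++n) {
        t[n].assign(n + 1, 1);            // both ends of each row are 1
        for (int k = 1; k < n; ++k)
            t[n][k] = (t[n-1][k-1] + t[n-1][k]) % p;
    }
    return t;
}
```

Assign one colour per residue 0..p-1 and plot; the zeros alone already trace the pattern.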

If anyone is interested, a few days ago I posted an addendum to my Mathematical Musing about what I’m calling “Sinoids”, for lack of a better term, mostly trying to derive the exponent for a supercircle (i.e. a superellipse with equal major and minor axes) that would pass through the crest of a sine lobe positioned with its endpoints on the x- and y-axes and its crest on the line y=x. Please let me know if anything isn’t clear, if there are errors in my math, or if you have suggestions on how to complete the derivation.

http://sightless-sanctuary.net/Mathematical%20Musings/sinoids.txt

Also, continuing on my mathy art, I’ve looked up how to calculate sine functions in C++ and convert floats to integers, and tried experimenting with finer grain than treating every pixel as an ordered pair.

http://sightless-sanctuary.net/Graphics/Programmatic/waves/

waves_old.png was a probably ill-advised attempt to treat coordinates as floats incremented in steps of 0.1, and is technically bugged output, as optipng complained of extraneous data and I had to use its -fix argument to do the conversion. waves.png reverts to treating pixel coordinates as integers and instead divides the coordinates by 10 in the equation.

For this one, the base equation is z = sin(y + sin(x)), which I believe produces what you get if you extrude a sine wave along a second sine wave with an orthogonal time axis. I modified it by dividing both x and y by 10, since they are being incremented in integer steps, and multiplied the whole thing by 128 to expand the range from -1:1 to -128:128. I then use FF00FF (magenta) as the color for zero, cast the output to an integer, and if positive, I add the output to the green color channel, and if negative, I add it to both the red and blue channels. Since C++ uses radians by default and I’m dividing by 10, each interval of pi should be about 31 or 32 pixels. If I’m not mistaken, the overall effect should be a gradient that cycles from a ~50% shade of magenta to pure magenta to a ~50% tint of magenta and back vertically, with the gradient shifting up and down as you move horizontally. But I can’t confirm the output, and I’m worried that dividing x and y by 10 might not give enough points for the result to look smooth, or that pi being only 31 or 32 pixels might be too small for the pattern to be apparent at full size… Plus, skimming the .ppm, there seemed to be long stretches of identical pixels and then large jumps when they change, adding to concerns this version might be blocky.
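Roughly, the per-pixel rule I just described comes out to something like this sketch (not my exact source; wave_pixel is an arbitrary name):

```cpp
#include <cmath>

// z = 128 * sin(y/10 + sin(x/10)), truncated to an int. Zero maps to
// pure magenta (FF00FF); positive z is added to green (a tint toward
// lighter magenta) and negative z is added to red and blue (a shade
// toward darker magenta).
void wave_pixel(int x, int y, int& r, int& g, int& b) {
    int z = (int)(128.0 * std::sin(y / 10.0 + std::sin(x / 10.0)));
    r = 255; g = 0; b = 255;   // magenta baseline for z == 0
    if (z > 0) {
        g += z;                // up to (255, 128, 255), a ~50% tint
    } else {
        r += z;                // z <= 0 here, so this darkens
        b += z;                // down to (127, 0, 127), a ~50% shade
    }
}
```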

waves_colors.png is modified from what I got by feeding the raw PPM through sort and uniq. It confirms that waves.ppm indeed includes all 257 colors in the intended range. I modified the raw output of sort and uniq to make it a proper .ppm and converted it to PNG, because why not. Though, as 257 is a Fermat prime, it’s just a single line of pixels, and while the shades are in proper order, the tints aren’t, as I couldn’t figure out how to get sort to properly sort variable-digit numbers, so instead of going 1, 2, 3, they are sorted 1, 10, 100… 128, 2, 20… 29, 3, 30…

Also, anyone know any linux command line tools that can evaluate several .cpp source files and make suggestions about what can be pulled out as functions? because with these images, I feel like I’m constantly copying one source file and reusing 90% of its code, and I really do want to generalize this stuff, and that much duplicated code suggests there’s room for converting stuff to functions, but I’m drawing a blank on how to do that in this case.

1 Like

Refactoring is one of those tasks where programming becomes more than just understanding the language. There might be a tool for this in C++, just because of how well-documented the language is now, but usually this is something where an engineer takes a “cleanup” day and refactors with strategic intent.

Recommended practices usually suggest doing this as often as you write code, for best results and maintainability.
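For these generators specifically, the usual extraction looks something like this: one shared PPM writer, with the per-image math passed in as a function (hypothetical names, sketched from the descriptions upthread, not from the actual source):

```cpp
#include <fstream>
#include <functional>
#include <string>

// One shared ASCII (P3) PPM writer; each image supplies only its
// per-pixel rule, so the 90% of duplicated boilerplate lives here.
using PixelRule = std::function<void(int x, int y, int& r, int& g, int& b)>;

void write_ppm(const std::string& path, int w, int h, const PixelRule& rule) {
    std::ofstream out(path);
    out << "P3\n" << w << " " << h << "\n255\n";
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            int r = 0, g = 0, b = 0;
            rule(x, y, r, g, b);   // the only part that varies per image
            out << r << " " << g << " " << b << "\n";
        }
}
```

A new image is then just a lambda, e.g. write_ppm("gradient.ppm", 256, 256, [](int x, int y, int& r, int& g, int& b) { r = x; b = y; });, instead of a copied source file.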

2 Likes

I’m reading/listening to Tom Stoppard’s Arcadia, which is partially about Pierre de Fermat’s last theorem.

As I understand it, experts agree that Pierre de Fermat’s proof was incorrect. However, it’s debated whether he sincerely thought he had a proof, or whether he meant it as an ironic excuse.

(The first third of Arcadia seems to take the latter stance, unsurprisingly, through Thomasina’s proof, which suggests he wanted to torment future mathematicians.)

Despite the debate, I’ve never seen any speculation on exactly what Pierre de Fermat might have thought his proof was. Has anybody tried to recreate a simple but incorrect proof that he might have proposed? Or has anyone tried to gather evidence that he was joking?

This isn’t really a math question … more of a history question, I guess.

4 Likes

His attempt might have been an infinite descent argument, which he had used successfully elsewhere. Basically, you show that if there were a solution to x^n+y^n=z^n for some n >= 3, then you could construct a solution for some m < n. Since this can be applied again and again, you would get solutions for infinitely many positive integers m less than n. But there aren’t infinitely many positive integers less than n, so you couldn’t have started the sequence in the first place.
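Written out, the shape of that argument is:

```latex
% General shape of an infinite-descent argument for x^n + y^n = z^n:
\begin{itemize}
  \item Let $S = \{\, n \ge 3 : x^n + y^n = z^n \text{ has a
        positive-integer solution} \,\}$ and suppose $S \neq \emptyset$.
  \item The construction turns a solution for any $n \in S$ into one for
        some $m$ with $3 \le m < n$, so $S$ has no smallest element.
  \item But every non-empty set of positive integers has a smallest
        element, so $S = \emptyset$: no solution exists.
\end{itemize}
```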

He may have thought he had such a construction, but was mistaken. I’m no historian though, so cannot back this conjecture with evidence.

I mean, I do have evidence, but this margin is… Just kidding.

4 Likes

That sort of makes sense, though I haven’t gone beyond high school math, and I haven’t done that for more than a decade. The important thing is that he could have conceivably had something in mind.

The “too narrow margin” thing sounds so much like a joke to me that I’ve always assumed that’s what it was. On further reading, I see that this is a second-hand report and the original note was never found. I’m still surprised to see so little speculation elsewhere about his exact supposed proof. Anyway, thanks.

I mean, I do have evidence, but this margin is… Just kidding.

XD

1 Like

There has been a lot of speculation in the academic literature. Nobody knows for sure, but some possibilities are much better than others. The most likely idea, by far, is that Fermat thought when he was young that the argument of his proof for fourth powers also worked for cubes (which is, to an extent, true), and that this generalized to all exponents. Sadly, the last bit is not correct. He himself must have realised this, because later in life he is on record stating the truth of the conjecture for exponents 3 and 4, but not the general statement. He never corrected his margin annotation, probably thinking nobody would take notice of it (if only!).

The same trap snared later mathematicians armed with more powerful algebraic weaponry: in the mid-19th century, Lamé and Cauchy announced that they had a proof of the general conjecture and published it. It was Kummer who realised that the proof was based on the false assumption that an algebraic property shared by integer ring extensions of low degree generalizes to all exponents: specifically, that the ring extensions of the integers by exp(2iπ/p) are unique factorization domains (UFDs) for all values of the prime p. This is not correct: the first value that fails is p = 23.

This is subtle, and was unexpected in 1850. Even proving that these ring extensions are UFDs for p < 23 is not at all trivial: the proof that is valid for p = 3 and p = 5 doesn’t work for p = 7 (although this value also generates a UFD, you have to prove it in a totally different way). Kummer was able to prove infinitely many cases of Fermat’s conjecture using this argument (the exponents for which Lamé and Cauchy’s argument is valid are called “regular primes”), but not all of them. This was the first substantial advance towards a general proof, and it already used the machinery of field theory developed in the first half of the 19th century. Much more sophisticated theory and another 150 years were needed to generalize the right argument, which goes in a different direction.

For the really interested, here is Kummer’s main argument explained, which is spectacular on its own (requires only undergraduate algebra, but it is quite involved). The same paper includes a reformulation of the argument using the powerful machinery of Galois field theory.

7 Likes

Thanks! This is now way out of my depth but it should give me some background as I try to get through Arcadia.

I think I see where the play is going (i.e. trying to parallel a proof that may not have existed with a poet who didn’t die in a duel implied by a margin note). I get the feeling I’m going to need CliffsNotes etc. to understand the whole thing, though.

2 Likes