Many programmers consider Python vs. Java a battle of ease of programming vs. speed…
Does pypy offer python programs the speed of statically typed languages?
Let’s throw a couple programs at it and see what happens:
I used the last version of a program built up in a previous article:
Python 3 / 2.7:

```python
from __future__ import division, print_function
import time

t0 = time.time()

# build the prime list by trial division
prime_list = []
maxx = 10**7
for n in range(2, maxx + 1):
    isprime = True
    for p in prime_list:
        if n % p is 0:  # note: 'is' where '==' belongs -- the cause of the PyPy discrepancy below
            isprime = False
            break
    if isprime == True:
        prime_list.append(n)

def smallestPrimeFactor(n):
    for p in prime_list:
        if n % p == 0:
            return p

prime_count = {p: 0 for p in prime_list}
for n in range(2, maxx + 1):
    prime_count[smallestPrimeFactor(n)] += 1

# repeatedly merge the two smallest counts (Huffman-style) to build the
# in_sets/out_sets partition answerkey
keys = []
values = []
in_sets = []
out_sets = []
for p in sorted(prime_count):
    keys.append([p])
    values.append(prime_count[p])
while len(keys) > 1:
    min1_value = min(values)
    min1_index = values.index(min1_value)
    min1_key = keys[min1_index]
    del values[min1_index]
    del keys[min1_index]
    min2_value = min(values)
    min2_index = values.index(min2_value)
    min2_key = keys[min2_index]
    del keys[min2_index]
    del values[min2_index]
    in_sets.append(set(min1_key + min2_key))
    out_sets.append(set(min1_key))
    keys.append(min1_key + min2_key)
    values.append(min1_value + min2_value)

# now that we have a partition answerkey (mapping of in_sets to out_sets)
# let's increment through the entire range of integers and compute the
# algorithm's performance
print("done with partition answerkey, time elapsed =", int(1000*(time.time()-t0)), "ms")

question_history = []
for n in range(2, maxx + 1):
    prime_subset = set(prime_list)
    question_count = 0
    while True:
        question_count += 1
        checkset = out_sets[in_sets.index(prime_subset)]
        if smallestPrimeFactor(n) in checkset:
            prime_subset = checkset
        else:
            prime_subset = prime_subset - checkset
        if len(prime_subset) == 1:
            #print("number =", n, " smallest factor (>1) =", list(prime_subset)[0], " questions asked =", question_count)
            question_history.append(question_count)
            break

print("worst case =", max(question_history), "questions")
print("best case =", min(question_history), "questions")
print("average case =", sum(question_history), "/", (maxx-1), "~=", sum(question_history)/(maxx-1), "questions")
print("time elapsed =", int((time.time()-t0)*1000)/1000, "secs")
```
The results with a small N (maxx = 10^4):
```
$ python3 -V
Python 3.2.2
$ python3 primes.py
done with partition answerkey, time elapsed = 388 ms
worst case = 13 questions
best case = 1 questions
average case = 37473 / 9999 ~= 3.747674767476748 questions
time elapsed = 5.21 secs

$ python -V
Python 2.7.2
$ python primes.py
done with partition answerkey, time elapsed = 317 ms
worst case = 13 questions
best case = 1 questions
average case = 37473 / 9999 ~= 3.74767476748 questions
time elapsed = 4.477 secs

$ ./pypy-1.7/bin/pypy -V
Python 2.7.1 (7773f8fc4223, Nov 18 2011, 22:15:49)
[PyPy 1.7.0 with GCC 4.0.1]
$ ./pypy-1.7/bin/pypy primes.py
done with partition answerkey, time elapsed = 6043 ms
worst case = 14 questions
best case = 1 questions
average case = 37474 / 9999 ~= 3.74777477748 questions
time elapsed = 43.68 secs
```
Forgetting about performance for a moment, PyPy yields a different result!

If it can't imitate Python in a black-box sense, then it's a bit of a failure in my eyes.
Just for a sanity check, I also installed pypy-1.7 on a Windows box:
```
PS C:\> .\pypy-1.7\pypy -V
Python 2.7.1 (930f0bc4125a, Nov 27 2011, 11:58:57)
[PyPy 1.7.0 with MSC v.1500 32 bit]
PS C:\> .\pypy-1.7\pypy primes.py
done with partition answerkey, time elapsed = 3628 ms
worst case = 14 questions
best case = 1 questions
average case = 37474 / 9999 ~= 3.74777477748 questions
time elapsed = 26.488 secs
```
Failed again. pypy-1.7 is the current shipping/stable release; let's try the Linux version:
```
# pypy -V
Python 2.7.1 (?, Nov 22 2011, 08:37:28)
[PyPy 1.7.0 with GCC 4.6.2]
# pypy primes.py
done with partition answerkey, time elapsed = 2951 ms
worst case = 14 questions
best case = 1 questions
average case = 37474 / 9999 ~= 3.74777477748 questions
time elapsed = 20.856 secs
```
Again a failure. But before wiping PyPy from my mind, I decided to check a nightly build to see if it's also broken:
```
$ ./pypy-nightly/bin/pypy -V
Python 2.7.1 (e3fbdc682ed9, Jan 25 2012, 02:00:17)
[PyPy 1.8.1-dev0 with GCC 4.2.1]
$ ./pypy-nightly/bin/pypy primes.py
done with partition answerkey, time elapsed = 218 ms
worst case = 13 questions
best case = 1 questions
average case = 37473 / 9999 ~= 3.74767476748 questions
time elapsed = 6.486 secs
```
Success: it looks as if the latest nightly can at least imitate the functionality of normal Python on this program.
[Carl Friedrich Bolz pointed out the issue in the comments], but in short: it's not really PyPy's fault. It may not imitate normal Python exactly, but the behavior it differed on is behavior that no Python promises, and that a programmer shouldn't rely on from any Python:

using 'is' as if it were '==', even for "0 is 0"… see the comments for details…
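To make the distinction concrete, here is a minimal sketch of my own (not from the original article): `==` compares values, while `is` compares object identity, and CPython merely happens to cache small integers.

```python
# '==' compares values; 'is' compares object identity.
# CPython caches small ints (-5..256), so "n % p is 0" *happens* to work
# there, but identity of equal integers is an implementation detail that
# an interpreter like PyPy is free to handle differently.
x = int("300")
y = int("300")
print(x == y)   # True: the values are equal, guaranteed on every Python
print(x is y)   # unspecified: depends on the interpreter's int caching
```

The equality result is guaranteed by the language; the identity result is not, which is exactly why the program above behaved differently under PyPy.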
Looking at speed, it's disappointing. While PyPy is faster on the first part of the program, it doesn't impress on the whole. Let's give it a larger workload and see what happens. (I'm including comparisons to an analogous Java program from the same article.)
| | python3 | python2 | pypy-nightly | pypy-1.7 | java |
|---|---|---|---|---|---|
| lines of code | 64 | 64 | 64 | 64 | 121 |
| chars of code | 2086 | 2086 | 2086 | 2086 | 3980 |
| 10^5: part 1 time (secs) | 18.3 | 11.0 | 4.0 | 3.1 | 1.3 |
| 10^5: total time (secs) | 368 | 329 | 391 | 277 | 136 |
| 10^6: part 1 time (secs) | 1234.0 | 1060.8 | 262.4 | 232.4 | 91.3 |
| 10^6: total time (secs) | 38877 | 39107 | 42273 | 33011 | 18285 |
Well, pypy-1.7 is very fast at the first part of the program… closer to Java speed than to CPython. Over the entire duration of the program, pypy-1.7 is a bit faster than CPython, but not in a league with Java.
Another test is with the code from this puzzle.
For your convenience, here's the code:

```python
import time

def incementlistx(listb7):
    idx = 0
    go = True
    listb7[idx] += 1
    while go:
        go = False
        if listb7[idx] == 7:
            if idx == 0:
                listb7[idx] = 1
            else:
                listb7[idx] = 0
    if len(listb7) 5001008001005:
        keep = False
        break
```
| | python3 | python2 | pypy-1.7 | pypy-nightly |
|---|---|---|---|---|
| time (secs) | 5594 | 3257 | 604 | 698 |
It's faster here, significantly so. This is what I expected from PyPy from the start.
versions (unless otherwise specified):

```
python3:      3.2.2 (default, Nov 21 2011, 16:50:59) [GCC 4.6.2]
python2:      2.7.2 (default, Nov 21 2011, 17:25:27) [GCC 4.6.2]
pypy-1.7:     Python 2.7.1 (?, Nov 22 2011, 08:37:28) [PyPy 1.7.0 with GCC 4.6.2]
pypy-nightly: 2.7.1 (c9343ef21049, Jan 28 2012, 02:54:12) [PyPy 1.8.1-dev0 with GCC 4.4.3]
java:         java version "1.7.0_147-icedtea"
              OpenJDK Runtime Environment (IcedTea7 2.0) (ArchLinux build 7.b147_2-0.5-x86_64)
              OpenJDK 64-Bit Server VM (build 21.0-b17, mixed mode)
```
Looks like PyPy offers significant speed gains in certain situations… It might be worth testing out with your program.
Years later I can say the vast majority of the thousand or so themes have one of the following problems:
The theme/design needs to get out of the way of the content. The theme itself is not the content.
I won't post samples, but almost all of the themes have this issue. For years now the normal/minimum screen resolution has been 1366×768 on laptops and 1920×1080 on desktops, so why restrict the main content of your website to 600 pixels of width? Fluid-width themes that adapt to the width of the window are few and far between.
Much of the visual noise is in the form of links and multiple ways to navigate content on the site. Many themes expect you to use a calendar so people can access posts by day, and monthly links to content, and content by tag, and by category, and a tree of how the content is organized…
How many readers on a given blog want to look at any old post, let alone need multiple ways to navigate old content, when they could instead be looking through the content itself rather than menus for navigating it? Maybe a few visitors would want or need to navigate content with many options, but that certainly shouldn't take up a large part of every single page on the site.
The examples above were not hand-selected over the years. Look for yourself here: how many would you consider viable options?
Anyway, the choice is either to design my own theme, or to try to hack away at one to mold it into something I'd be happy with. I decided on the latter this time, and chose this to start hacking:
http://thethemefoundry.com/paperpunch/
I have had a long-running fascination with SVG, as it allows for extremely lightweight, simple graphics, and now that IE9 is out, there's no reason not to use it.
So I changed the main image for the theme to an embedded SVG document, but the shipping version of Safari choked on it, so I reference an svg document as an image and everyone’s happy. Yes, even Internet Explorer (9).
This picture I created/wrote is only a few lines of code:
```xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg xmlns="http://www.w3.org/2000/svg" height="230" width="230">
  <polygon points="0,10 0,230 220,230 220,10" style="fill:#fff;stroke:#dedede;stroke-width:1px"/>
  <circle cx="115" cy="115" r="115" id="circ" style="fill:grey;"/>
  <polygon points="0,10 0,53 115,115" style="fill:#eeeeee;"/>
  <polygon points="0,53 0,85 115,115" style="fill:#8FB4FF;"/>
  <polygon points="0,85 0,115 115,115" style="fill:#1463FF;"/>
  <polygon points="0,115 0,145 115,115" style="fill:#0047D6;"/>
  <polygon points="0,145 0,184 115,115" style="fill:#8400c3;"/>
  <polygon points="0,184 0,230 115,115" style="fill:#ee1111;"/>
  <polygon points="0,230 44,230 115,115" style="fill:#228822;"/>
  <polygon points="44,230 79,230 115,115" style="fill:#FFCC55;"/>
  <polygon points="79,230 115,230 115,115" style="fill:#000000;"/>
</svg>
```
Aside from the image, I modified the theme to fit my tastes:
- I removed most images involved in the theme, making it lighter-weight.
- I removed the menus below the title.
- I removed the overly large footer.
- I moved the search widget from the footer to the sidebar, and replaced the annoying (in code and function) search-form placeholder text with the new HTML5 placeholder attribute (this doesn't seem to work in IE9).
- I removed the distractingly large comment image/count attached to each post on the index page. I also removed the RSS images/links and made a single text feed link that points to the Atom feed.
- I widened the main column, and while viewing a single post I made it even wider, at the expense of the sidebar. The site is now 1120 pixels wide.
- I removed some funky scripting that modified the font for titles; not only was it clunky in general, it also didn't work in IE9 without compatibility mode.
- I made lots of little edits to the master CSS file, including adding visible lines in tables and removing most underlines from links and link hovers.
- The new theme broke the functionality of a syntax-highlighter plugin, so I had to manually add the scripts to the header.
- I added some spacing to fix alignment issues, and to create the illusion that part of my image on the top right is bursting out of its area.
- I added a "read more" SVG/image icon so it doesn't get obscured by the rest of the text/page.
In the end I like the results. I believe the site is now lightweight and extremely easy to read and navigate. There is little extra content trying to protrude; the single sidebar is narrow and exists only on the main page, where it provides a fast way to click through past articles without additional visual noise.
The next improvement I'd like to make to the site is logging in with Google/Facebook/etc. in order to post comments. This is surprisingly hard to do with any existing WordPress plugin that I know of. There are plenty of plugins that let people log in elsewhere via OpenID with the accounts they created on your blog, but that's obviously not the issue for most bloggers. Most sites are not worth creating a new account or giving an email address just to comment.
As someone picky about details, I might say after my years with WordPress that I might have been better off writing my own little CMS in Django, with my own CSS/style to accompany it. The advantages of having a mainstream CMS are nearly zero: many plugins don't work as advertised, or only work for certain themes or older versions of WordPress, and most themes are unsuitable without lots of hacking.
A spaceship has 100,000 bits of storage, and one of these bits is known to be faulty. You can locate the faulty bit using agents that run on any given subset of bits and return "OK" if all of the bits are good, and die if they encounter the faulty bit. It takes an agent one hour to run a query, regardless of the size of the subset, but an infinite number of agents can run simultaneously. You need to find the wrong bit in two hours. Since we must decide, in advance, how many agents to send with the spaceship, we are interested in the following questions:

A. What is the minimal number of agents needed? (Bonus question: Find a formula for the number of agents needed for n bits and t hours.)

B. Suppose we want to send enough agents to be able to repeat the same task a second time with the remaining agents (i.e., those who did not die during the first invocation). How many agents are needed in that scenario?

Update March 2nd: Different agents can access the same memory bit at the same time.
I've written up a handful of these Ponder This puzzles and continue to see value in doing so, not just for my own record, but also to help people get better at problem solving. I do this knowing that the Ponder This people publish a solution (which I've appended to this post), because the expanded path that I walk through is more helpful to those who might have problems, and it shows what I think is an honest look at what goes into understanding these things.

I do strongly suggest trying the puzzle for yourself before reading my solution.
Enough with the prologue, let’s start:
Without considering any sort of concurrency (two agents testing the same bit at the same time):
- We could send out 99,999 agents, assign 1 per bit, and see which one doesn't come back; if they all come back, it's the bit that wasn't checked.
- We could send out 2 agents, 50,000 bits each, for the first hour, and then, depending on which one comes back, send out 49,999 agents to test those 50,000 bits, 1 per bit. Reusing the first hour's survivor, a total of 50,000 agents would be needed.
- We could send out 2 agents, 33,333 bits each, leaving 33,334 untested. The first hour narrows the problem down to 33,333 or 33,334 bits; either way, in the second hour we need to add 33,331 agents, totaling 33,333 agents.
- We could send out 3 agents, 25k bits each, leaving 25k untested. The first hour narrows the problem down to 25,000 bits, and 24,999 agents must run in hour two to find the bad bit, totaling 25,000 agents.
So the first hour we partition the bits into smaller sets, the set from the agent that dies in the first hour will be examined 1bit per agent in the second hour to find the bit.
let’s call the number of agents sent out in the first hour a1, and the number of agents sent out in the second hour a2.
you can see that as we increase the number of agents sent out in the first hour, we decrease the number of agents required for the second hour. We know that the number of partitions created in the first hour is a1+1 (since we can narrow it down to the last partition if they all come back alive), and the size of each partition is 100,000/(a1+1). In the second hour, a2 must be one less than the size of the remaining partition (in order to test all but one in order to finger the faulty bit).
Without worrying about the details of agents dying and uneven partition sizes, let's try to minimize the total number of agents. Since at most 1 agent dies in the first hour, the total number of agents sent out is approximately the greater of a1 and a2.
we know a2 = 100,000/(a1+1) – 1
let's approximate a2 as 100,000/a1, since the two ones are nearly insignificant.
so we want to minimize max(a1,a2) which is roughly max(a1,100,000/a1)
As we increase a1, a2 decreases over the entire range, so the minimal maximum of the two occurs when they are equal to each other: a1 = 100,000/a1 => a1^2 = 100,000 => a1 = sqrt(100,000) ~= 316.
so as a rough estimate the minimal number of agents we will need is around 316.
Sending out 316 agents the first hour, making 316 partitions of 316, and then in the 2nd hour sending out the 315 agents that returned alive after the first hour, 1 per bit for the final hour to find the bit in the partition.
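A quick brute-force check of that estimate (my own sketch, not part of the original argument): with a1 first-hour agents, the second hour needs about ceil(100,000/(a1+1)) − 1 agents, and the total is roughly the larger of the two.

```python
# approximate worst-case total agents for the no-concurrency scheme:
# a1 agents make a1+1 partitions; hour two needs (largest partition - 1) agents
N = 100_000
cost = {a1: max(a1, -(-N // (a1 + 1)) - 1) for a1 in range(1, 2000)}
print(min(cost.values()))  # 316, matching the sqrt(100,000) estimate
```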
This can be solved exactly, and the details can be worked out; you will find the answer to be very near this. I'm not going to labor over it because we have not even considered concurrency yet… so let's do that.

Let's look at a basic example of concurrency:
We have 100,000 bits, and let's say we send out 2 agents in the first hour. Without concurrency we can partition the space into 3 roughly equal sets (each 33,333 or 33,334 bits). With concurrency we can tell agent1 to check bits 1-50,000 and agent2 to check bits 25,001-75,000, so that:

- if only agent1 dies, we know the problem bit is in 1-25k;
- if they both die, it's in 25,001-50k;
- if only agent2 dies, it's in 50,001-75k;
- if both survive, it's in the untouched partition of 75,001-100k.

So instead of partitioning into 3 areas, concurrency allows us to use 2 agents to partition the bits into 4 sets. The cost is that both agents could die. As we increase the overlap of agents testing a given bit, more agents can die in the worst case. Without any concurrency, the worst case is 1 death per hour.
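The decoding of those four outcomes can be written out directly (a trivial sketch of the example above):

```python
# decode the live/die pattern of the two overlapping agents into a quarter,
# using the ranges from the example above
def locate(agent1_dies, agent2_dies):
    if agent1_dies and agent2_dies:
        return "25,001-50,000"
    if agent1_dies:
        return "1-25,000"
    if agent2_dies:
        return "50,001-75,000"
    return "75,001-100,000"

print(locate(True, False))   # the faulty bit is in agent1's exclusive range
```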
When is the extra precision worth the extra loss of agents?
Let’s take a look at what a number of agents can do in 1 hour.
| | bit1 | bit2 | bit3 | bit4 | bit5 | bit6 | bit7 | bit8 |
|---|---|---|---|---|---|---|---|---|
| agent1 | | check | | | check | check | | check |
| agent2 | | | check | | check | | check | check |
| agent3 | | | | check | | check | check | check |
The above chart is an example where 3 agents can cover 8 bits completely. Looking at who comes back alive, we can determine exactly what bit is faulty. The downside is the worst case of 3 agent deaths.
Here's a table of what a number of agents can do in 1 hour:

| total # of agents | min deaths (min concurrency) | max deaths (max concurrency) | size of set in which the agents can determine the faulty bit, w/ concurrency (up to) | bits covered per agent death (worst case), w/ concurrency | size of set solvable w/o concurrency (max deaths = 1) | bits covered per agent death (worst case), w/o concurrency |
|---|---|---|---|---|---|---|
| 1 | 0 | 1 | 2 | 2 | 2 | 2 |
| 2 | 0 | 2 | 4 | 2 | 3 | 3 |
| 3 | 0 | 3 | 8 | ~2.67 | 4 | 4 |
| 4 | 0 | 4 | 16 | 4 | 5 | 5 |
| 5 | 0 | 5 | 32 | 6.4 | 6 | 6 |
| 6 | 0 | 6 | 64 | ~10.67 | 7 | 7 |
| 7 | 0 | 7 | 128 | ~18.29 | 8 | 8 |
| 8 | 0 | 8 | 256 | 32 | 9 | 9 |
| 9 | 0 | 9 | 512 | ~56.89 | 10 | 10 |
| a | 0 | a | 2^a | (2^a)/a | a+1 | a+1 |
So even per death (let alone per total agents required), concurrency is far better than non-concurrency for a reasonably large set of bits.
From this we can simply set the size of the set which agents can solve in an hour (2^a) equal to 100,000, and solve for a. a needs to be 17 before 2^a is larger than 100,000… So 17 agents can find the faulty bit in 1 hour.
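A quick sketch of the one-hour strategy (mine, not from the article): give each bit a 17-digit binary address and have agent i check every bit whose i-th binary digit is 1; the set of agents that die then spells out the faulty address.

```python
import math

N = 100_000
a = math.ceil(math.log2(N))   # 17 agents: 2^17 = 131,072 >= 100,000

def dead_agents(faulty_index, n_agents):
    # agent i checks the bits whose i-th binary digit is 1, so it dies
    # exactly when the faulty index has that digit set
    return {i for i in range(n_agents) if (faulty_index >> i) & 1}

def decode(dead):
    # the set of dead agents is the binary expansion of the faulty index
    return sum(1 << i for i in dead)

print(a)                               # 17
print(decode(dead_agents(73512, a)))   # recovers 73512
```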
But we have 2 hours at our disposal, so maybe we can do better than 17.
Here again is the chart showing what 3 agents can do with concurrency, solving 8 bits in 1 round:
| | bit1 | bit2 | bit3 | bit4 | bit5 | bit6 | bit7 | bit8 |
|---|---|---|---|---|---|---|---|---|
| agent1 | | check | | | check | check | | check |
| agent2 | | | check | | check | | check | check |
| agent3 | | | | check | | check | check | check |
Now, instead of individual bits, imagine 8 mutually exclusive sets of bits: (check for a set, means the agent checks every bit in the set)
| | bit set 1 | bit set 2 | bit set 3 | bit set 4 | bit set 5 | bit set 6 | bit set 7 | bit set 8 |
|---|---|---|---|---|---|---|---|---|
| agent1 | | check | | | check | check | | check |
| agent2 | | | check | | check | | check | check |
| agent3 | | | | check | | check | check | check |
So just as 3 agents could find the faulty bit out of 8 bits, they can also isolate the set containing the faulty bit out of 8 sets. The chart comparing coverage with and without concurrency is therefore also relevant for partitions, not just individual bits; here it is again with only a few words changed:
| total # of agents | min deaths (min concurrency) | max deaths (max concurrency) | # of sets/partitions that can be narrowed down to 1, w/ concurrency (up to) | sets covered per agent death (worst case), w/ concurrency | # of sets/partitions narrowed down to 1, w/o concurrency (max deaths = 1) | sets covered per agent death (worst case), w/o concurrency |
|---|---|---|---|---|---|---|
| 1 | 0 | 1 | 2 | 2 | 2 | 2 |
| 2 | 0 | 2 | 4 | 2 | 3 | 3 |
| 3 | 0 | 3 | 8 | ~2.67 | 4 | 4 |
| 4 | 0 | 4 | 16 | 4 | 5 | 5 |
| 5 | 0 | 5 | 32 | 6.4 | 6 | 6 |
| 6 | 0 | 6 | 64 | ~10.67 | 7 | 7 |
| 7 | 0 | 7 | 128 | ~18.29 | 8 | 8 |
| 8 | 0 | 8 | 256 | 32 | 9 | 9 |
| 9 | 0 | 9 | 512 | ~56.89 | 10 | 10 |
| a | 0 | a | 2^a | (2^a)/a | a+1 | a+1 |
So let's use the fact that we have two hours to split the problem into; it certainly helped a lot without concurrency, so let's see if it helps with it.

As before, it seems natural that the best way to split the problem is to partition into 316 groups of about 316. We would need 9 agents for the first hour to narrow it down to 1 set, and then another 9 agents for the second hour to narrow it down to 1 bit: a total of 18 agents.

This is worse than what we can do in one hour. Looking at how this method works, let's exploit its most efficient set size, which is 2^a: partition into groups of 256 bits, giving 391 groups. We know the second hour needs 8 agents to find the bit among 256, and the first hour needs 9 agents to select the partition out of the 391. This totals 17 agents, which is only as good as what we can do in 1 hour.

Two things come to mind when thinking about possible improvements. One: if we send out 9 agents to partition the bits and another 8 in the 2nd hour, the 2nd group of 8 simply waits idle for the first hour to end. Could they be utilized without risking having fewer than 8 left for the 2nd hour? Also, in the best case (fewest deaths) none of the 9 agents die in the first hour, yet they go unused in the 2nd hour, which seems inefficient.
Addressing the first thought: yes, let's see how many partitions we can narrow down to one, assuming we limit the max concurrency so that at least 8 agents survive in the worst case:
| total # of agents | min deaths (min concurrency) | max deaths (max concurrency) | # of sets/partitions that can be narrowed down to 1, w/ concurrency (up to) | partitions covered per agent death (worst case), w/ concurrency | # of sets/partitions narrowed down to 1, w/o concurrency (max deaths = 1) | partitions covered per agent death (worst case), w/o concurrency |
|---|---|---|---|---|---|---|
| 1 | n/a | n/a | n/a | n/a | n/a | n/a |
| 2 | n/a | n/a | n/a | n/a | n/a | n/a |
| 3 | n/a | n/a | n/a | n/a | n/a | n/a |
| 4 | n/a | n/a | n/a | n/a | n/a | n/a |
| 5 | n/a | n/a | n/a | n/a | n/a | n/a |
| 6 | n/a | n/a | n/a | n/a | n/a | n/a |
| 7 | n/a | n/a | n/a | n/a | n/a | n/a |
| 8 | 0 | 0 | 1 | n/a | 1 | n/a |
| 9 | 0 | 1 | 10 | 10 | 10 | 10 |
| 10 | 0 | 2 | 56 | 28 | 11 | 11 |
| 11 | 0 | 3 | 232 | ~77.33 | 12 | 12 |
| 12 | 0 | 4 | 794 | 198.5 | 13 | 13 |
| 13 | 0 | 5 | 2380 | 476 | 14 | 14 |
| 14 | 0 | 6 | 6476 | ~1079.33 | 15 | 15 |
| a | 0 | a-8 | sum(ncr(a,k), k=0..(a-8)) | sum(ncr(a,k), k=0..(a-8)) / (a-8) | a+1 | a+1 |
Note that ncr(n,r) is a function which gives the number of ways to select r items from n items where order of selection is irrelevant and duplicate selection is not allowed. You may have seen it in combinatorics as a combination, "n choose r" (nCr). Google it if it's not familiar.
So we can use 12 agents to solve 794 bits/sets with at least 8 agents surviving. So in our problem of 100,000 bits we can use 12 agents to split it into 391 partitions of 256 bits in the first hour, and then use 8 that come back alive in the 2nd hour to isolate the individual bit from the 256. With this method a total of 12 agents are required.
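That row of the table is easy to verify with a few lines (a sketch of the counting argument, not code from the article): with at least 8 survivors required, at most a − 8 agents may die, and each distinguishable partition corresponds to one possible subset of dead agents.

```python
from math import comb

def distinguishable(a, survivors=8):
    # each partition is identified by the subset of agents that die on it;
    # at most a - survivors agents are allowed to die
    return sum(comb(a, k) for k in range(a - survivors + 1))

print(distinguishable(9))    # 10
print(distinguishable(12))   # 794 -- enough for the 391 partitions of 256 bits
```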
What about my 2nd thought, that in the best case (or any non-worst case) we don't utilize the agents that survive the first hour?

If the faulty bit is in the partition tested by no agents, or by only a few agents, in the first hour, we will be left with more agents in the 2nd hour than necessary to finish. So in these non-worst cases we actually have enough agents to scan a larger partition in the second hour. Maybe we should make those partitions larger? That might reduce the total number of agents required to cover 100,000 bits.
How much bigger?
Well, we know in the last hour we will need enough agents alive to solve the size of the partition with the faulty bit. If the size of the partition is S, and we know ‘a’ agents can isolate a faulty bit out of 2^a, then 2^a>=S, and solving for a, a>=lg(S).
So for a given partition, if the number of agents that don't check it is A_n, that is the number that will survive when the faulty bit is in that partition; so if A_n agents don't check a partition, it can be up to 2^A_n bits in size.

So the partition that is untested in hour 1 can be 2^a bits large, while the partitions that are tested by 1 agent can be 2^(a-1) bits in size, and so on. Let's create a chart with 3 agents:
| | partition 1 | partition 2 | partition 3 | partition 4 | partition 5 | partition 6 | partition 7 | partition 8 |
|---|---|---|---|---|---|---|---|---|
| agent1 | | check | | | check | check | | check |
| agent2 | | | check | | check | | check | check |
| agent3 | | | | check | | check | check | check |
Now let's look at what the size of each partition can be, in order for the bit to be isolated in the next hour by the surviving agents:
| | # of agents that checked it | # of agents that will survive if the faulty bit is in this partition (# of agents that didn't check it) | size the partition can be, in bits (up to): the number of bits the surviving agents could solve in 1 hour |
|---|---|---|---|
| partition 1 | 0 | 3 | 8 |
| partition 2 | 1 | 2 | 4 |
| partition 3 | 1 | 2 | 4 |
| partition 4 | 1 | 2 | 4 |
| partition 5 | 2 | 1 | 2 |
| partition 6 | 2 | 1 | 2 |
| partition 7 | 2 | 1 | 2 |
| partition 8 | 3 | 0 | 1 |
So the total number of bits that 3 agents could process to find the 1 faulty bit in 2 hours is 27.
'a' agents can split the bits into 2^a partitions, and each partition's size varies with the number of agents testing it.

So for a given number of agents 'a', we have:
| # of agents which check a partition | # of agents who consequently didn't check it (# that will survive if the faulty bit is there) | max size of each partition (based on the number that will survive) | number of partitions with this many agents checking | total bits covered by all partitions in this row |
|---|---|---|---|---|
| 0 | a | 2^a | ncr(a,0) | 2^a · ncr(a,0) |
| 1 | a-1 | 2^(a-1) | ncr(a,1) | 2^(a-1) · ncr(a,1) |
| 2 | a-2 | 2^(a-2) | ncr(a,2) | 2^(a-2) · ncr(a,2) |
| 3 | a-3 | 2^(a-3) | ncr(a,3) | 2^(a-3) · ncr(a,3) |
| k | a-k | 2^(a-k) | ncr(a,k) | 2^(a-k) · ncr(a,k) |
| a-2 | 2 | 2^2 | ncr(a,a-2) | 2^2 · ncr(a,a-2) |
| a-1 | 1 | 2^1 | ncr(a,a-1) | 2^1 · ncr(a,a-1) |
| a | 0 | 2^0 | ncr(a,a) | 2^0 · ncr(a,a) |
So in total we add up the last column of the chart, the total bits covered by the partitions in each row.

That is Sum( 2^(a-k) · ncr(a,k), k = 0..a ), which can be rewritten as Sum( 2^k · ncr(a,k), k = 0..a ) and, by the binomial theorem, simplifies to (2+1)^a = 3^a. So in two hours, the number of bits that 'a' agents can solve is 3^a.
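The closed form is easy to check numerically (my own sketch):

```python
from math import comb

def bits_two_hours(a):
    # a partition checked by k of the a agents leaves a-k survivors,
    # so it can hold up to 2^(a-k) bits, and there are C(a,k) such partitions
    return sum(2 ** (a - k) * comb(a, k) for k in range(a + 1))

# matches (2+1)^a = 3^a, per the binomial theorem
assert all(bits_two_hours(a) == 3 ** a for a in range(1, 16))
print(bits_two_hours(3), bits_two_hours(11))   # 27 177147
```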
| number of agents | number of bits solvable in 2 hours |
|---|---|
| 1 | 3 |
| 2 | 9 |
| 3 | 27 |
| 4 | 81 |
| 5 | 243 |
| 6 | 729 |
| 7 | 2187 |
| 8 | 6561 |
| 9 | 19683 |
| 10 | 59049 |
| 11 | 177147 |
| 12 | 531441 |
| a | 3^a |
So in 2 hours we can solve up to 177,147 bits with 11 agents, and certainly 100,000 which is question A of this puzzle.
It also asks for a general formula for n bits and t hours. We now know 'a' agents can cover 3^a bits in 2 hours, 2^a bits in 1 hour (as mentioned earlier), and presumably (t+1)^a bits in t hours. Solving for a:

(t+1)^a = nbits, so a = log(nbits)/log(t+1), and if it's not an integer we need to round up, so technically:
agents = ceiling( log( number of bits) / log( number of hours + 1) )
that is the bonus part of question A.
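As a quick check of the formula (a sketch of my own):

```python
from math import ceil, log

def agents_needed(nbits, hours):
    # smallest integer a with (hours+1)^a >= nbits
    return ceil(log(nbits) / log(hours + 1))

print(agents_needed(100_000, 2))   # 11
print(agents_needed(100_000, 1))   # 17
```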
Question B asks: what if we need to accomplish this task of finding 1 bad bit out of 100,000 twice? We know that for the final attempt we will need 11 agents alive, so let's figure out how to do it once with a worst case of 11 agents surviving. We just need to figure out how many agents are needed when we restrict the concurrency, over the entire two-hour process, so that 11 remain.

So instead of Sum( 2^(a-k) · ncr(a,k), k = 0..a ) for the number of bits 'a' agents can figure out in two hours, during the first task 'a' agents will only be able to process sum( ncr(a, k) · sum( ncr(k, h), h = 11..k ), k = 11..a ) bits, because we use less concurrency, since 11 agents need to survive.
So to answer question B, we simply find the minimum a, such that:
sum(ncr(a, k)*(sum(ncr(k, h), h = 11 .. k)), k = 11 .. a) > 10^5
and that solution is a=16.
So we need 16 agents to be able to accomplish the task twice.
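Evaluating that sum directly (a quick sketch) confirms the threshold:

```python
from math import comb

def first_task_capacity(a, reserve=11):
    # bits 'a' agents can handle in two hours while guaranteeing that at
    # least `reserve` agents survive, per the sum above
    return sum(comb(a, k) * sum(comb(k, h) for h in range(reserve, k + 1))
               for k in range(reserve, a + 1))

print(first_task_capacity(15))   # 25931  -- not enough
print(first_task_capacity(16))   # 173889 -- clears 100,000
```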
Admittedly this walkthrough of question B is brief and maybe rushed, but if you followed A, you should be able to connect the dots and read between the lines.

Fun stuff.
IBM published their solution:
Using n agents in t hours, we can discover a faulty bit from a set of (t+1)^n bits. We prove that using induction on t. The base (t=0) is trivial. For larger t's, we write the bit addresses in base t+1 and give the i-th agent to test the numbers whose i-th digit is t. After an hour, we are left with k living agents, n-k known digits, and k digits with only t possible values, which suits the induction.

If we have two hours to find the faulty bit (t=2), we need 11 (ceiling(log_3 100,000)) agents to solve part A. One possible solution is 3^11, which is 77,147 more than 100,000. 77,147 is the dollar sum racked up by IBM's Watson while winning Jeopardy! last month.

Having more agents can help us lose fewer of them. For example, if we have 100,000 agents, we can find the faulty bit in one hour while losing only one agent. Careful calculation reveals that using 16 agents, we can choose a set of 100,000 16-digit numbers in base 3 that do not contain the digit 2 more than five times. This guarantees that at least 11 agents would live for the second part.
Last month's Ponder This puzzle caught my eye, so I decided to give it a whirl.

In short, it's to find an integer n which has 3 properties:
A: n is flippable and flip(n) = n
B: n*n is flippable
C: n is divisible by 2011
where the flip of an integer, is the number that appears when the decimal representation in old style 7segment font is rotated 180 degrees.
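For instance (a small sketch of my own, with a hypothetical helper `flip_number` just to illustrate the definition): in 7-segment digits, 0, 1, 2, 5, and 8 rotate to themselves, 6 and 9 rotate to each other, and 3, 4, and 7 have no rotated counterpart.

```python
# 180-degree rotations of old-style 7-segment digits
FLIP = {"0": "0", "1": "1", "2": "2", "5": "5", "8": "8", "6": "9", "9": "6"}

def flip_number(s):
    # read the digits in reverse (rotating the number also reverses their
    # order); returns None if any digit (3, 4, 7) is unflippable
    try:
        return "".join(FLIP[c] for c in reversed(s))
    except KeyError:
        return None

print(flip_number("69"))    # "69": reads the same upside down
print(flip_number("123"))   # None: 3 has no 180-degree counterpart
```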
I will walk through how my solution evolved.
We could just increment integers, checking each one for all 3 conditions, and the easiest shortcut that seems obvious is to increment by 2011, (only checking numbers that satisfy condition C). Let’s look at the code that does this:
```python
import time

def flip(n):
    if n in ["0", "1", "2", "5", "8"]:
        return n
    if n == "6":
        return "9"
    if n == "9":
        return "6"
    return False

def flippable(z):
    for s in str(z):
        if s in ["3", "4", "7"]:
            return False
    return True

def flipnisn(z):
    strz = str(z)
    lenz = len(strz)
    for y in range(0, int(lenz/2)):
        if flip(strz[y]) != strz[lenz-y-1]:
            return False
    return True

t0 = time.time()
x = 0
add = 2011
while True:
    x += add
    if flippable(x):
        if flipnisn(x):
            if flippable(x*x):
                print(str(x) + " is flippable")
                print("flip(" + str(x) + ") = " + str(x))
                print("n^2 = " + str(x**2) + " is flippable")
                break
print(str(int(time.time() - t0)) + " seconds")
```
output:
```
5000261920005 is flippable
flip(5000261920005) = 5000261920005
n^2 = 25002619268652089019200025 is flippable
2073 seconds
```
This is simple and concise, but it does lots of conversions between integers and their string representations to get at the individual digits of the base-10 number.
Imagine a 20 digit decimal integer, for the vast majority of multiples of 2011, from one to the next only 4 digits actually change, so it’s silly to convert the entire integer each time to a string to check for flippability conditions.
Using a list of digits as the base-10 representation of the number, adding 2011 each iteration, and handling the carry-over properly when necessary may take extra code, and will likely be slower for small numbers, but at some point it wins, once the number of digits in the integer is large compared to 4.

Let's look at an implementation:
```python
import time

def flip(n):
    if n in [0, 1, 2, 5, 8]:
        return n
    if n == 6:
        return 9
    if n == 9:
        return 6
    return False

def int2list(z):
    n = []
    while z != 0:
        n.append(z % 10)
        z = int(z/10)
    return n

def add2list(listx, add):
    again = True
    i = 0
    lenadd = len(add)
    while again:
        again = False
        if lenadd > i:
            listx[i] += add[i]
            if lenadd > i+1:
                again = True
        if listx[i] > 9:
            # carry over into the next digit's place
            if len(listx) <= i+1:
                listx.append(int(listx[i]/10))
            else:
                listx[i+1] += int(listx[i]/10)
                if listx[i+1] > 9:
                    again = True
            listx[i] = listx[i] % 10
        i += 1
    return

def flippable(z):
    for d in z:
        if d in [3, 4, 7]:
            return False
    return True

def flipnisn(z):
    lenz = len(z)
    for y in range(0, lenz):
        if flip(z[y]) != z[lenz-y-1]:
            return False
    return True

t0 = time.time()
add = 2011
listx = int2list(add)
addlist = int2list(add)
for n in range(2, 10**20):
    add2list(listx, addlist)
    if flippable(listx):
        if flipnisn(listx):
            x = n*add
            if flippable(int2list(x**2)):
                print(str(x) + " is flippable")
                print("flip(" + str(x) + ") = " + str(x))
                print("n^2 = " + str(x**2) + " is flippable")
                break
print(str(int(time.time() - t0)) + " seconds")
```
This is indeed slower for small numbers, but eventually it is less work for the computer to manage, and it becomes quicker at very large numbers.
Still, it's not very smart: it iterates through and examines countless multiples of 2011 that could and should be skipped entirely.
The number 16088002011000000000000 is a multiple of 2011, but flip(16088002011000000000000) doesn't equal 16088002011000000000000, so property A is false. This is a 23-digit number: its 11th digit is 1, while its mirrored position (the 13th digit) holds a 0. Since flip(1) = 1, that 13th digit must change before the number even has a chance of satisfying property A. So there is no need to examine anything as we add 2011 until the 13th digit changes, and since that digit's unit value is 10^10, we would need to add 2011 about 10^10/2011 times before there's even a chance of satisfying property A.
A number like 40220000000000000000 has no chance of even being flippable (it contains a 4) until the leading digit changes, which is roughly 5*10^15 additions of 2011 away…
So this is clearly not a perfect algorithm we have here.
The above programs are good enough to find the smallest integer satisfying the puzzle's conditions, but even with 24 hours of computation time, similar attempts don't find the extra-credit number, which asks for the same conditions except with a remainder of 100 when divided by 2011.
Let's do some basic math. For any integer n:
Property A:
Each digit has a 7-in-10 chance of being flippable, and in base 10 an integer n has int(log(n))+1 digits, so the probability that n is flippable is about (7/10)^int(log(n)+1).
For a 10-digit number, that's (7/10)^10 ~= 2.8 * 10^-2.
And for each flippable number, what are the odds that flip(n) = n? For each digit in the first half, there is a 1-in-7 probability that its flipped value is exactly what sits in the mirrored digit position.
So the odds that the first half of the digits (when flipped) match the corresponding digits on the other side of the number are about (1/7)^int((log(n)+1)/2).
| probability | n is flippable | flip(n)=n (assuming n is flippable) | both (Property A) |
|---|---|---|---|
| n | (7/10)^int(log(n)+1) | (1/7)^int((log(n)+1)/2) | (7/10)^int(log(n)+1) * (1/7)^int((log(n)+1)/2) |
| n ~= 5*10^4 | 1.7 * 10^-1 | 2.0 * 10^-2 | 3.4 * 10^-3 |
| n ~= 5*10^8 | 4.0 * 10^-2 | 4.2 * 10^-4 | 1.7 * 10^-5 |
| n ~= 5*10^16 | 2.3 * 10^-3 | 1.7 * 10^-7 | 4.0 * 10^-10 |
| n ~= 5*10^32 | 7.7 * 10^-6 | 3.0 * 10^-14 | 2.3 * 10^-19 |
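These digit-counting estimates are easy to sanity-check by exhaustive enumeration over a small range. Here's a quick sketch (the helper names and the 4-digit range are my own choices, not from the programs above):

```python
# count, among all 4-digit numbers, how many are flippable and how many
# additionally satisfy flip(n) = n, then compare against the estimates
FLIP = str.maketrans("69", "96")  # 0,1,2,5,8 flip to themselves; 6 and 9 swap

def flippable(n):
    return all(c not in "347" for c in str(n))

def flip_equals_self(n):
    s = str(n)
    return flippable(n) and s.translate(FLIP)[::-1] == s

flip_count = sum(flippable(n) for n in range(1000, 10000))
self_count = sum(flip_equals_self(n) for n in range(1000, 10000))
print(flip_count, flip_count / 9000)  # 2058, ~0.229 vs the (7/10)^4 = 0.24 estimate
print(self_count)                     # 42 vs the 9000 * (7/10)^4 * (1/7)^2 ~ 44 estimate
```

The small gaps come from the estimates ignoring that a leading digit can't be 0.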
Property B: (is n^2 flippable)
n^2 has int(log(n^2)+1) digits, and each one has a 7/10 chance of being flippable, so the probability that n^2 is flippable is (7/10)^int(log(n^2)+1).
| probability that | n^2 is flippable |
|---|---|
| n | (7/10)^int(log(n^2)+1) |
| n ~= 5*10^4 | 3.9 * 10^-4 |
| n ~= 5*10^8 | 4.5 * 10^-7 |
| n ~= 5*10^16 | 1.2 * 10^-12 |
| n ~= 5*10^32 | 4.1 * 10^-24 |
Property C: (n is divisible by 2011)
any integer n has a 1/2011 ~= 5 * 10^-4 probability of being divisible by 2011
| approximate probability | n is flippable and flip(n)=n | n^2 is flippable | n is divisible by 2011 |
|---|---|---|---|
| n ~= 5*10^4 | 3.4 * 10^-3 | 3.9 * 10^-4 | 5.0 * 10^-4 |
| n ~= 5*10^8 | 1.7 * 10^-5 | 4.5 * 10^-7 | 5.0 * 10^-4 |
| n ~= 5*10^16 | 4.0 * 10^-10 | 1.2 * 10^-12 | 5.0 * 10^-4 |
| n ~= 5*10^32 | 2.3 * 10^-19 | 4.1 * 10^-24 | 5.0 * 10^-4 |
So by these numbers it's clear that building numbers divisible by 2011 and then checking the other conditions is a really bad methodology for large numbers, since divisibility by 2011 is incredibly common relative to the other conditions. Building numbers around the rarest condition is best, as it means fewer numbers checked before finding a match on all properties.
It's therefore tempting to build huge flippable numbers (the huge numbers being n^2), then take the square root and check property A and then C. The problem with this, of course, is that most flippable numbers do not have integer square roots. Let's look at that bit of probability:
| approximate probability that | n^2 is flippable | an integer around n^2's magnitude has an integer square root |
|---|---|---|
| n | (7/10)^int(log(n^2)+1) | 1/((n+1)^2 - n^2) = 1/(2n+1) |
| n ~= 5*10^4 | 3.9 * 10^-4 | 1.0 * 10^-5 |
| n ~= 5*10^8 | 4.5 * 10^-7 | 1.0 * 10^-9 |
| n ~= 5*10^16 | 1.2 * 10^-12 | 1.0 * 10^-17 |
| n ~= 5*10^32 | 4.1 * 10^-24 | 1.0 * 10^-33 |
So you would have to build approximately 2n+1 flippable numbers just to increment n once. In other words, it's rarer to be a perfect square than to be flippable.
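If we did go this route, checking whether a huge flippable candidate has an integer square root can be done exactly, without floating point. A minimal sketch, assuming a recent Python (`math.isqrt` exists since 3.8):

```python
from math import isqrt  # exact integer square root, safe for huge integers

def is_perfect_square(m):
    r = isqrt(m)
    return r * r == m

print(is_perfect_square(5000261920005**2))      # True
print(is_perfect_square(5000261920005**2 + 1))  # False
```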
My next attempt was incrementing (although not exactly in order) through flippable integers, and prepending to each its own flipped mirror image. Say I have 1250556: I would prepend 9550521 to create 95505211250556, which is flippable and whose flipped value is itself. This is not immediately easy to do, as the vast majority of numbers, especially large ones, aren't flippable (we computed the probability that n is flippable to be about (7/10)^int(log(n)+1)). Being flippable simply means no digit in the decimal representation is 3, 4, or 7, so every flippable digit has 7 options instead of 10, and that seems a lot like base 7 to me.
So I'm going to step through the flippable integers by incrementing a base-7 integer. Each base-7 digit represents one of the flippable base-10 digits, transformed per digit with:
0→0, 1→1, 2→2, 3→5, 4→6, 5→8, 6→9
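That per-digit mapping can be sketched as a converter from an ordinary counter to the k-th flippable integer (nth_flippable is my own hypothetical helper, and note it still builds a string per call, which is exactly the repeated work the list representation described next avoids):

```python
FLIPPABLE_DIGITS = "0125689"  # base-7 digit d maps to the d-th flippable decimal digit

def nth_flippable(k):
    # write k in base 7, then read its digits through the mapping above
    digits = []
    while k:
        digits.append(FLIPPABLE_DIGITS[k % 7])
        k //= 7
    return int("".join(reversed(digits))) if digits else 0

print([nth_flippable(k) for k in range(1, 11)])  # [1, 2, 5, 6, 8, 9, 10, 11, 12, 15]
```

Because the mapping is order-preserving per digit, counting in base 7 walks the flippable integers in increasing order.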
Even if Python could innately give me the base-7 string of an integer, we know from our first couple of attempts that converting each and every huge integer we create into a string isn't entirely wise, as most of the work is repeated for huge numbers. So I used a list of integers with values ranging from 0 through 6: each index represents a digit's place, and the integer at each index is that digit's value. Just as with our earlier list code, I have an add/carry-over function to deal with incrementing our base-7 number; it is rarely taxed, because the vast majority of increments don't involve (m)any carry-overs.
```python
import time

def incementlistx(listb7):
    idx = 0
    go = True
    listb7[idx] += 1
    while go:
        go = False
        if listb7[idx] == 7:
            if idx == 0:
                # the units digit skips 0: flipped, it becomes the leading digit
                listb7[idx] = 1
            else:
                listb7[idx] = 0
            if len(listb7) <= idx+1:
                # every digit rolled over: grow the list by one place
                listb7 = [1]
                for x in range(0, idx+1):
                    listb7.append(0)
                break
            listb7[idx+1] += 1
            if listb7[idx+1] == 7:
                go = True
        idx += 1
    return listb7

def cdecimal(listb7):
    # generates two integers from listb7, assuming there is a mirrored image
    # (flipped) of listb7 to the left of it:
    # one for an even, and one for an odd, number of digits
    odd = 0
    even = 0
    for i in range(0, len(listb7)):
        odd += o[listb7[i]]*10**i
        if i != len(listb7)-1:
            odd += f[listb7[i]]*10**(2*(len(listb7)-1)-i)
        even += o[listb7[i]]*10**i
        even += f[listb7[i]]*10**(2*(len(listb7))-1-i)
    return [odd, even]

def isflippable(x):
    for c in str(x):
        if c in ["3", "4", "7"]:
            return False
    return True

o = [0, 1, 2, 5, 6, 8, 9]  # base-7 digit -> flippable decimal digit
f = [0, 1, 2, 5, 9, 8, 6]  # base-7 digit -> flipped decimal digit
t0 = time.time()
listb7 = [1]  # listb7 is a list of base-7 digits representing the right half of the integer n
while True:
    listb7 = incementlistx(listb7)
    for n in cdecimal(listb7):
        r = n % 2011
        if r == 100 or r == 0:
            if isflippable(n**2):
                print(str(n) + " has a remainder of " + str(r) + " when divided by 2011")
                print(str(int(time.time() - t0)) + " seconds")
```
outputs:
```
5000261920005 has a remainder of 0 when divided by 2011
6 seconds
5001008001005 has a remainder of 0 when divided by 2011
13 seconds
1062212556552122901 has a remainder of 0 when divided by 2011
6112 seconds
106068858858858890901 has a remainder of 0 when divided by 2011
41447 seconds
500555602966209555005 has a remainder of 0 when divided by 2011
47190 seconds
10962025901110652029601 has a remainder of 100 when divided by 2011
140140 seconds
```
This code finds the first answer in 6 seconds, and our first attempt at the top of the page took 35 minutes to do the same task. (!!!)
It takes a while, but it does find the smallest number satisfying the extra credit conditions.
Note: all of these little programs will probably be around a thousand times faster if written in java or C/C++.
So that concludes my time with this puzzle, but if I wanted to make an even better/faster piece of software for a similar task, let's look at those probabilities again:
| approximate probability | n is flippable and flip(n)=n | n^2 is flippable | n is divisible by 2011 |
|---|---|---|---|
| n ~= 5*10^4 | 3.4 * 10^-3 | 3.9 * 10^-4 | 5.0 * 10^-4 |
| n ~= 5*10^8 | 1.7 * 10^-5 | 4.5 * 10^-7 | 5.0 * 10^-4 |
| n ~= 5*10^16 | 4.0 * 10^-10 | 1.2 * 10^-12 | 5.0 * 10^-4 |
| n ~= 5*10^32 | 2.3 * 10^-19 | 4.1 * 10^-24 | 5.0 * 10^-4 |
| approximate probability that | n^2 is flippable | an integer around n^2's magnitude has an integer square root |
|---|---|---|
| n | (7/10)^int(log(n^2)+1) | 1/((n+1)^2 - n^2) = 1/(2n+1) |
| n ~= 5*10^4 | 3.9 * 10^-4 | 1.0 * 10^-5 |
| n ~= 5*10^8 | 4.5 * 10^-7 | 1.0 * 10^-9 |
| n ~= 5*10^16 | 1.2 * 10^-12 | 1.0 * 10^-17 |
| n ~= 5*10^32 | 4.1 * 10^-24 | 1.0 * 10^-33 |
All things equal, if there were multiple properties to check, and it was equally as costly to check each, you’d check the rarest first, so you can eliminate the number faster, minimizing the total number of checks.
Clearly the rarest property for a number n is that its square is flippable, so maybe we want to increment through the perfect squares: check if the square is flippable, then check if the square root is flippable and that the flip of the square root is itself, then check the remainder when divided by 2011… but how do you increment through perfect squares? Well, that's easy.
For any perfect square n^2, the next perfect square is (n+1)^2 = n^2 + 2n + 1. So if we know n^2 (the current perfect square), we can add 2n+1 to jump to the next one, checking flippability at each jump.
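That jump can be sketched as a small generator (my own structure, not code from the attempts above):

```python
import itertools

def perfect_squares():
    # yields (n, n*n) pairs, advancing with a single addition per step,
    # since (n+1)^2 - n^2 = 2n + 1
    n, sq = 1, 1
    while True:
        yield n, sq
        sq += 2*n + 1
        n += 1

for n, sq in itertools.islice(perfect_squares(), 5):
    print(n, sq)  # 1 1 / 2 4 / 3 9 / 4 16 / 5 25
```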
Let me know if you think this would indeed be better or not, why?
I’ve been generally unhappy with the laptop choices recently and was curious to see what apple would do to their macbook airs.
Apple seems to be the only company who can take a step back and try to design an ideal product, rather than tweak the same design over and over again.
What is the perfect ultra portable laptop?
The major step Apple took with their new ultraportable is enclosure-less solid-state storage. This kind of solution was previously marketed at ultra-low-cost products; for SanDisk, not including an enclosure was a way to save pennies, just as smaller cars in the US in the 80s/90s were for those who couldn't afford more.
Most manufacturers have continued to design around 2.5" laptop drives, even on systems offering SSDs, in order to cater to the masses who look for lots of "GB"s. Also, since almost all designs up to now have catered to 2.5" enclosures, the lazy way is to simply keep doing it.
Here is the data I collected:
The upsetting number there is "2008" under the new Apple products: the Core 2 Duos used in the brand-new MacBook Air were launched by Intel in 2008.
Apple is using them because they want to keep using the chipset they have relied on for a while now, the NVIDIA 320M, which is a single-chip southbridge/IGP for socket 775 CPUs. New Intel CPUs put the IGP on-package in the CPU, and although the newer CPU is smaller, lower-power (32nm vs 45nm), and faster (in performance per clock, per watt, and in clock speed with Turbo Boost when thermals allow), the on-package IGP is not good enough for some modern computer games (full-screen 3D games like StarCraft 2, not FarmVille). Apple chooses to use a worse processor in order to connect it to a more powerful graphics IGP. The NVIDIA IGP is not noticeably better than the Intel IGP for encoding/decoding video, operating-system graphics, or 2D games; its only advantage is in full-screen immersive 3D games, and although it is significantly faster there, it is still pretty slow compared to a computer or laptop meant for gaming.
Packing the two most power-hungry chips into the same package also has physical advantages. Here you can see the simplified cooling system Lenovo was able to use in their new ThinkPad Edge 11: the top is a separate CPU/IGP design, the bottom the modern Intel CPU/IGP in one package:
(note that the Southbridge without IGP doesn’t produce enough heat to require cooling)
Clearly the latter reduces the physical complexity of the laptop.
The Apple 11.6" deserves some props for its input system, a full-size keyboard and a fantastic touchpad; it joins the Edge 11 as one of the only 11" laptops with decent input/pointing devices.
Final Words:
This form factor is clearly the future; I'm excited to see more attempts at an ideal ultraportable.
Mistakes I’ve seen the players make in their ultraportable attempts:
I believe that in 2011, next-gen large SSDs from the likes of Intel, and Sandy Bridge LV/ULV CPUs with an on-die 32nm IGP, will make this market very exciting. There have also been rumors that Acer (and I'm sure others) are making progress on a frameless screen; from the data I collected, ~65% screen-area use is normal today, and 100% would obviously be a huge step forward.
I can't really recommend anything here; they all have significant issues. The Edge 11 is pretty decent, but not really thin.
The prime number 4535653, when translated to base 16, gives the hexadecimal number 0x453565, which has the same digits as the original number, omitting the last digit.
Find another example of a prime number that, translated to hexadecimal, yields the same digits, omitting the last 21 digits.
This was the Ponder This Challenge for April 2010.
It relies on positional numeral systems, which are widely used but rarely thought about in our culture.
Let's consider the 7-digit decimal (base-10) number 4535653:
| digit's place | 7 | 6 | 5 | 4 | 3 | 2 | 1 |
|---|---|---|---|---|---|---|---|
| unit value | 10^6 | 10^5 | 10^4 | 10^3 | 10^2 | 10^1 | 10^0 |
| digit integer | 4 | 5 | 3 | 5 | 6 | 5 | 3 |
| value per place | 4*10^6 | 5*10^5 | 3*10^4 | 5*10^3 | 6*10^2 | 5*10^1 | 3*10^0 |
So to get the value of the entire 7-digit number, we simply sum the values per place of the decimal number, which is
4*10^6 + 5*10^5 + 3*10^4 + 5*10^3 + 6*10^2 + 5*10^1 + 3*10^0
which = (as written in decimal) 4535653
I’ll do the same table for the hexadecimal (base 16) number 453565:
| digit's place | 6 | 5 | 4 | 3 | 2 | 1 |
|---|---|---|---|---|---|---|
| unit value | 16^5 | 16^4 | 16^3 | 16^2 | 16^1 | 16^0 |
| digit | 4 | 5 | 3 | 5 | 6 | 5 |
| value per place | 4*16^5 | 5*16^4 | 3*16^3 | 5*16^2 | 6*16^1 | 5*16^0 |
4*16^5 + 5*16^4 + 3*16^3 + 5*16^2 + 6*16^1 + 5*16^0 = (decimal) 4535653 = (hexadecimal) 453565
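The table arithmetic is easy to confirm directly, since Python's int() takes a base argument:

```python
# the hex string "453565" and the decimal string "4535653" denote the same value
assert int("453565", 16) == 4535653
assert int("4535653", 10) == 4535653
print(hex(4535653))  # 0x453565
```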
Let’s get back to the problem, they want us to find a number with two properties:
1. the hexadecimal representation is identical to the decimal representation omitting the last 21 digits.
2. the integer is prime.
For now let's examine #1 for a moment: how do we find numbers with that property?
well each digit in a number of any base normally has this value to contribute:
unit_value * digit_integer
where:
unit_value = base ^ (digit_place – 1)
0 <= digit_integer < base , so max(digit_integer) = base – 1
maximum_cumulative_value_of_previous_digits(digit_place) = sum(max(digit_integer)*unit_value(p), p=1..digit_place – 1)
which = unit_value(digit_place) – 1
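That identity (a run of maxed-out digits is worth one less than the next unit) is easy to confirm for a few bases and places; the function names here are my own, mirroring the definitions above:

```python
def unit_value(base, digit_place):
    return base ** (digit_place - 1)

def max_cumulative_value_of_previous_digits(base, digit_place):
    # every lesser place filled with the maximum digit, base - 1
    return sum((base - 1) * unit_value(base, p) for p in range(1, digit_place))

for base in (10, 16):
    for place in range(1, 9):
        assert max_cumulative_value_of_previous_digits(base, place) == unit_value(base, place) - 1
print("identity holds")
```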
let’s declare a couple variables
n is an integer
hexn is the base 16 representation of n
decn is a base-10 number constructed by appending 21 zeros to the right of the literal hexn.
For this problem, the decimal number must be equal in value to the hex number, but must share all of its digits except for the last 21…
This means we can use the last 21 digits of the decimal number to make up for any discrepancy in value, and 21 decimal digits can sum to at most sum(9 * 10^(place – 1), place = 1..21) = 999999999999999999999 = 10^21 – 1.
So any integer n where hexn – decn is nonnegative and no more than 10^21 – 1 is a candidate to be a solution.
Our integer n will satisfy the first condition of the puzzle IFF:
0 <= hexn – decn < 10^21
So let's design a different positional numeral system to examine the value of this difference hexn – decn: a numeral system where a numeral's value is its contribution to hexn – decn, and the character set is 0-9.
special_unit_value(digit_place) = the unit value in the hex number minus the unit value in the decimal number after the 21 zeros are appended:
special_unit_value(digit_place) = 16^(digit_place – 1) – 10^(21+ digit_place – 1)
So we see from the graph above that we only get positive unit values once the hex number has enough digits. Let's look at a more detailed chart of special_unit_value per digit_place:
We see the cumulative value of all the lesser digits is, at its most negative,
-3653862699613613450061613042885398017647315043542900294867371565365704404131411080078183579778733265462124364559775760279143
~= -3.65*10^123
Our first positive unit value is at digit place 104, where special_unit_value = +576895500643977583230644928524336637254474927428499508554380724390492659780981533203027367035444557561459392400373732868096
~= +5.77*10^122
This is good and very positive, but not so large that the negative contributions of all the previous digits couldn't pull the total back into range.
The next positive special_unit_value at place 105 = +69230328010303641331690318856389386196071598838855992136870091590247882556495704531248437872567112920983350278405979725889536
~=+6.92*10^124
This tells us that our number cannot have even a single nonzero digit at place 105 or beyond, because the 105th special_unit_value is so much larger than all the negative contribution the lesser digits can make; in fact, maxing out the negatives of all the lesser digits subtracts only about 0.5% of the maximum (9-unit) value of the 105th digit.
(6.23*10^125 – 3.65*10^123 = 6.19*10^125)
Which is way larger than 10^21, the maximum acceptable special_value in this problem.
So hexn must be exactly 104 digits.
We could increment n (or hexn) starting at 16^103, but covering all 104-digit hex integers whose digits are all in [0-9] would take 9*10^103 tests of whether 0 <= hexn – decn < 10^21.
9*10^103 ~= 2^345.3, which isn't even close to feasible.
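For scale, that count expressed in bits:

```python
from math import log2

# the brute-force search space, in bits
print(log2(9 * 10**103))  # ~345.33
```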
Instead we can go from the 104th digit to lesser digits, only considering values which (considering the maximum negative cumulative value of lesser digits) can yield a value within our range of (0,10^21).
here’s the code:
```python
def specialUnitValue(place):
    return 16**(place-1) - 10**(21+place-1)

def maximumCumulativeValue(place):
    r = 0
    for x in range(1, place):
        r += 9*max(specialUnitValue(x), 0)
    return r

def minimumCumulativeValue(place):
    r = 0
    for x in range(1, place):
        r += 9*min(specialUnitValue(x), 0)
    return r

def getValue(digit_list):
    v = 0
    digit_list.reverse()
    p = 0
    for digit_value in digit_list:
        p += 1
        v += digit_value*(16**(p-1))
    digit_list.reverse()
    return v

def recursiveFilter(place, digit_list, current_value):
    if place == 0:
        number_list.append(getValue(digit_list))
        return
    for x in range(0, 10):
        new_current_value = current_value + x*specialUnitValue(place)
        # keep this prefix only if the remaining digits can still land in [0, 10**21)
        if -maximumCumulativeValue(place) <= new_current_value < 10**21 - minimumCumulativeValue(place):
            recursiveFilter(place-1, digit_list+[x], new_current_value)
    return

place = 104
digit_list = []
number_list = []
current_value = 0
recursiveFilter(place, digit_list, current_value)
print("the following numbers are represented exactly the same in hex and decimal, omitting the last 21 digits of the decimal representation:")
for n in number_list:
    print()
    print("(dec)", str(n))
    print("(hex)", str(hex(n))[2:])
```
outputs:
```
the following numbers are represented exactly the same in hex and decimal, omitting the last 21 digits of the decimal representation:

(dec) 0
(hex) 0

(dec) 10964973879145266684597918386961206640359744239118799335560543882943204057285999161085137407678295834996206085978967677028758
(hex) 10964973879145266684597918386961206640359744239118799335560543882943204057285999161085137407678295834996

(dec) 10965042548987794527084057351766268905929738560121836183995663823085815102408704908398080417506152437998396385865902388902296
(hex) 10965042548987794527084057351766268905929738560121836183995663823085815102408704908398080417506152437998

(dec) 11382975651657610360011724373059527023679401235446000875434984276671144440904563609247580929003959820277226372565581487538807
(hex) 11382975651657610360011724373059527023679401235446000875434984276671144440904563609247580929003959820277

(dec) 11383044321503994265174950355726300568630049590682850168652076962700460784689758874865273083169197842932705865806646564104498
(hex) 11383044321503994265174950355726300568630049590682850168652076962700460784689758874865273083169197842932

(dec) 11403774832697699927191683625580629545730840588452432470080875397274364900728637535263415299444961605674432192938509636490868
(hex) 11403774832697699927191683625580629545730840588452432470080875397274364900728637535263415299444961605674

(dec) 22786823350482228205608199978099471169330070448099877920884203755897679528961138686897878221599372454029885357802030895546409
(hex) 22786823350482228205608199978099471169330070448099877920884203755897679528961138686897878221599372454029

(dec) 22807622531522543610094414106106772108649683164642972183286269214619099352157014522058098996385254903281545695887189603267201
(hex) 22807622531522543610094414106106772108649683164642972183286269214619099352157014522058098996385254903281

(dec) 23225624303976723017196220515917404099608854985718038196616317054667018854359591688735436052791332652478513355408150356108408
(hex) 23225624303976723017196220515917404099608854985718038196616317054667018854359591688735436052791332652478

(dec) 34629467806524663910488411411813679561429784816827345905892170553651646560885008408033429786692704516617255059441377587848727
(hex) 34629467806524663910488411411813679561429784816827345905892170553651646560885008408033429786692704516617

(dec) 34630614434901308852464200036192701310784464271605203972278835294765396501224597514208949899168821013783738303866386899875715
(hex) 34630614434901308852464200036192701310784464271605203972278835294765396501224597514208949899168821013783

(dec) 35047469578049837366880709740625417027160304672000058413742228775833471575947576436574766395836776928100142757473522419925248
(hex) 35047469578049837366880709740625417027160304672000058413742228775833471575947576436574766395836776928100

(dec) 46452459709903429153073283753038447777798210559040188704150600559955356774538872177279968790528754691766851932291804282034022
(hex) 46452459709903429153073283753038447777798210559040188704150600559955356774538872177279968790528754691766

(dec) 57856307408728032899122821481744958592677162176215705623596847999672634661589955685836313952596116104916212444885882411960598
(hex) 57856307408728032899122821481744958592677162176215705623596847999672634661589955685836313952596116104916

(dec) 58274309180311384671530417444881588845667623071694823539992371381096007813431725768328081725978996158096888392730430747803798
(hex) 58274309180311384671530417444881588845667623071694823539992371381096007813431725768328081725978996158096

(dec) 69678152925760181390564134659968588468637327084560080449901460622205042762612707757129557846642035594344508706235255466050372
(hex) 69678152925760181390564134659968588468637327084560080449901460622205042762612707757129557846642035594344

(dec) 69700094553935846709056452304209818063563632357808257330729205642171418398171600992212697023360366697360024549290701595702112
(hex) 69700094553935846709056452304209818063563632357808257330729205642171418398171600992212697023360366697360
```
This is the comprehensive list of numbers satisfying condition #1 of the puzzle. #2 asks for one of these that is prime, so let's run a very simplistic primality test on them:
```python
import time

number_set = set(number_list)
not_prime_set = set()
for n in number_set:
    if n < 2:
        # 0 (and 1) are not prime; the trial-division loop below would miss them
        not_prime_set.add(n)
        print(n, "is not prime because", n, "isn't prime")
        continue
    t0 = time.time()
    for x in range(2, n):
        if n % x == 0:
            not_prime_set.add(n)
            print(n, "is not prime because it's divisible by", x)
            break
        if time.time() - t0 > 100:
            print(n, "may be prime, did not find any factors within 100 seconds")
            break
print("the remaining numbers to check for primality are:", number_set - not_prime_set)
```
outputs:
```
0 is not prime because 0 isn't prime
57856307408728032899122821481744958592677162176215705623596847999672634661589955685836313952596116104916212444885882411960598 is not prime because it's divisible by 2
35047469578049837366880709740625417027160304672000058413742228775833471575947576436574766395836776928100142757473522419925248 is not prime because it's divisible by 2
46452459709903429153073283753038447777798210559040188704150600559955356774538872177279968790528754691766851932291804282034022 is not prime because it's divisible by 2
58274309180311384671530417444881588845667623071694823539992371381096007813431725768328081725978996158096888392730430747803798 is not prime because it's divisible by 2
23225624303976723017196220515917404099608854985718038196616317054667018854359591688735436052791332652478513355408150356108408 is not prime because it's divisible by 2
11382975651657610360011724373059527023679401235446000875434984276671144440904563609247580929003959820277226372565581487538807 is not prime because it's divisible by 3
69678152925760181390564134659968588468637327084560080449901460622205042762612707757129557846642035594344508706235255466050372 is not prime because it's divisible by 2
10965042548987794527084057351766268905929738560121836183995663823085815102408704908398080417506152437998396385865902388902296 is not prime because it's divisible by 2
11403774832697699927191683625580629545730840588452432470080875397274364900728637535263415299444961605674432192938509636490868 is not prime because it's divisible by 2
22807622531522543610094414106106772108649683164642972183286269214619099352157014522058098996385254903281545695887189603267201 may be prime, did not find any factors within 100 seconds
34629467806524663910488411411813679561429784816827345905892170553651646560885008408033429786692704516617255059441377587848727 is not prime because it's divisible by 3
69700094553935846709056452304209818063563632357808257330729205642171418398171600992212697023360366697360024549290701595702112 is not prime because it's divisible by 2
22786823350482228205608199978099471169330070448099877920884203755897679528961138686897878221599372454029885357802030895546409 is not prime because it's divisible by 7
10964973879145266684597918386961206640359744239118799335560543882943204057285999161085137407678295834996206085978967677028758 is not prime because it's divisible by 2
34630614434901308852464200036192701310784464271605203972278835294765396501224597514208949899168821013783738303866386899875715 is not prime because it's divisible by 3
11383044321503994265174950355726300568630049590682850168652076962700460784689758874865273083169197842932705865806646564104498 is not prime because it's divisible by 2
the remaining numbers to check for primality are: {22807622531522543610094414106106772108649683164642972183286269214619099352157014522058098996385254903281545695887189603267201}
```
So the only number of these which could possibly be prime is:
22807622531522543610094414106106772108649683164642972183286269214619099352157014522058098996385254903281545695887189603267201
I will save this nontrivial primality test for another time, but we can already say that if the puzzle has a solution, this is it.
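I haven't run that test either, but for the curious, a probabilistic Miller-Rabin check is a standard way to get strong evidence quickly (this is a generic sketch, not the rigorous proof deferred above):

```python
import random

def is_probable_prime(n, rounds=20):
    # Miller-Rabin: a composite n fails a random base with probability >= 3/4,
    # so surviving many rounds is strong (but not conclusive) evidence of primality
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)  # modular exponentiation is fast even for a 125-digit n
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True
```

Primes always pass every round, so a True result here would at least be consistent with the candidate being the puzzle's answer.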
I settled on the HP ZR24w: an IPS panel with good colors and viewing angles, very little input lag, the ability to rotate for 16:10 or 10:16 use, and it wasn't ridiculously priced like most IPS panels.
On HP's page they boast a lot of energy-saving features, like an 85%-efficient power supply. Forgive me for not entirely trusting their energy-efficiency marketing; I started comparing LCD power consumption figures and found some striking differences. Take a look at the little table I constructed:
other sources:
What a wide range of power consumption, even at the same brightness. There are no sanely-priced, efficient IPS-panel 24-inch displays. From this data it seems LED backlighting uses less energy (no surprise), but maybe IPS panels need more power than TN? I say that because the Apple display is both LED and IPS and still guzzles watts (oddly, even more than Dell's CCFL IPS display).