I have a limited form of perfect (absolute) pitch, which I am sometimes asked about. Often it’s the same questions, so here they are. No doubt people with better perfect pitch than mine will be annoyed at this impudent upstart claiming the ability, but perfect pitch comes on a spectrum anyway. Apparently some people can identify notes to within a fifth of a semitone, while some can only identify the semitone closest to the note.

A couple of weeks ago, someone opined to me that there was a type of person who was just able to sit down and play at the piano, without sheet music. I, myself, am capable of playing precisely one piece inexpertly, from memory, at the piano. (My rendering of that piece is nowhere near the arranger’s standard.) I can play nothing else without sheet music. I very much think that this is the natural state for essentially every musician who has not spent thousands upon thousands of hours practising in a general way.

In my activities on The Student Room, a student forum, someone (let’s call em Entity, because I like that word) recently asked me about the following question. Isaac places some counters onto the squares of an 8 by 8 chessboard so that there is at most one counter in each of the 64 squares. Determine, with justification, the maximum number that he can place without having five or more counters in the same row, or in the same column, or on either of the two long diagonals.
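For orientation, the row condition alone gives an immediate upper bound; whether that bound can actually be attained under the column and diagonal conditions as well is the real content of the question:

```latex
\text{at most } 4 \text{ counters per row} \implies N \le 4 \times 8 = 32.
```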

Recently, a friend re-introduced me to the joys of the nonogram (variously known as “hanjie” or “griddler”). I was first shown these about ten years ago, I think, because they appeared in The Times. When The Times stopped printing them, I forgot about them for a long time, until two years ago, or thereabouts, I tried these on a website. I find the process much more satisfying on paper with a pencil than on computer, so I gave them up again and forgot about them again.

I have seen many glowing reviews of Soylent, and many vitriolic naturalistic arguments against it. What I have not really seen is a proper collection of credible reasons why you might not want to try Soylent (that is, reasons which do not boil down to “it’s not natural, therefore Soylent is bad” or “food is great, therefore Soylent is bad”). Soylent is untested. Admittedly, there are apparently trials being run, but I have not seen any data coming out of them (or indeed any evidence of a trial, other than the founder’s word).

This comes up quite frequently, but I’ve been stuck for an easy memory-friendly way to do this. I trawled through the 1A Vectors and Matrices course notes, and found the following mechanical proof. (It’s not a discovery-proof - I looked it up.) Lemma Let \(A\) be a symmetric matrix. Then any eigenvectors corresponding to different eigenvalues are orthogonal. (This is a very standard fact that is probably hammered very hard into your head if you have ever studied maths post-secondary-school.)
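The standard two-line argument: take eigenvectors \(u, v\) with \(Au = \lambda u\), \(Av = \mu v\) and \(\lambda \neq \mu\). Since \(A\) is symmetric,

```latex
\lambda \langle u, v \rangle = \langle A u, v \rangle = \langle u, A^{\mathsf{T}} v \rangle = \langle u, A v \rangle = \mu \langle u, v \rangle,
```

so \((\lambda - \mu)\langle u, v \rangle = 0\), and since \(\lambda \neq \mu\) we conclude \(\langle u, v \rangle = 0\): the eigenvectors are orthogonal (and orthonormal once each is normalised).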

This is part of what has become a series on discovering some fairly basic mathematical results, and/or discovering their proofs. It’s mostly intended so that I start finding the results intuitive - having once found a proof myself, I hope to be able to reproduce it without too much effort in the exam. Statement of the theorem Sylvester’s Law of Inertia states that given a quadratic form \(A\) on a real finite-dimensional vector space \(V\), there is a diagonal matrix \(D\), with diagonal entries \((\underbrace{1, \dots, 1}_{p}, \underbrace{-1, \dots, -1}_{q}, 0, \dots, 0)\), to which \(A\) is congruent; moreover, \(p\) and \(q\) are the same however we transform \(A\) into this diagonal form.
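For reference, “congruent” here means congruent as quadratic forms, i.e. related by a change of basis of \(V\):

```latex
A \text{ is congruent to } D \iff D = P^{\mathsf{T}} A P \quad \text{for some invertible } P.
```

The pair \((p, q)\) is the signature of the form, and \(p + q\) is its rank.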

Prof Körner told us during the IB Metric and Topological Spaces course that the real meat of the course (indeed, its hardest theorem) was “a metric space is sequentially compact iff it is compact”. At the moment, all I remember of this result is that one direction requires Lebesgue’s lemma (whose statement I don’t remember) and that the other direction is quite easy. I’m going to try and discover a proof - I’ll be honest when I have to look things up.

This is to detail a much easier proof (at least, I find it so) of Cayley-Hamilton than the ones which appear on the Wikipedia page. It only applies in the case of complex vector spaces; most of the post is taken up with a proof of a lemma about complex matrices that is very useful in many contexts. The idea is as follows: given an arbitrary square matrix, upper-triangularise it (looking at it in basis \(B\)).
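The key identity behind the upper-triangular approach: if \(T\) is upper-triangular with diagonal entries \(\lambda_1, \dots, \lambda_n\) (which are then its eigenvalues), then

```latex
(T - \lambda_1 I)(T - \lambda_2 I) \cdots (T - \lambda_n I) = 0,
```

which one checks by induction: the partial product \((T - \lambda_1 I) \cdots (T - \lambda_k I)\) kills the span of the first \(k\) standard basis vectors, because \((T - \lambda_k I)\) maps that span into the span of the first \(k - 1\).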

As part of the recent series on how I approach maths problems, I give another one here (question 14 on the Maths Tripos IB 2007 paper 4). The question is: Show that a compact metric space has a countable dense subset. This is intuitively clear if we go by our favourite examples of metric spaces (namely \(\mathbb{R}^n\), the discrete metric and the indiscrete metric). Indeed, in \(\mathbb{R}^n\), which isn’t even compact, we have the rationals (so the theorem doesn’t give a necessary condition, only a sufficient one); in the indiscrete metric, any singleton \(\{x\}\) is dense (since the only closed non-empty set is the whole space); in the discrete metric, where every set is open, we can’t possibly be compact unless the space is finite, so that’s why the theorem doesn’t hold for a topology with so many sets.
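One standard route, for comparison: for each \(n\), the balls of radius \(1/n\) cover the space, so compactness extracts a finite subcover, and we collect the centres:

```latex
X = \bigcup_{i=1}^{k_n} B\!\left(x_i^{(n)}, \tfrac{1}{n}\right), \qquad S = \left\{ x_i^{(n)} : n \in \mathbb{N},\ 1 \le i \le k_n \right\}.
```

Then \(S\) is a countable union of finite sets, hence countable, and every point of \(X\) lies within \(1/n\) of some member of \(S\) for every \(n\), so \(S\) is dense.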

This post is to be a list of conformal mappings, so that I can get better at answering questions like “Find a conformal mapping from <this domain> to <this domain>”. The following Mathematica code is rough-and-ready, but it is designed to demonstrate where a given region goes under a given transformation.

    whereRegionGoes[f_, pred_, xrange_, yrange_] :=
     whereRegionGoes[f, pred, xrange, yrange] =
      With[{xlist = Join[{x}, xrange], ylist = Join[{y}, yrange]},
       ListPlot[
        Transpose@Through[{Re, Im}[
           f /@ (#[[1]] + #[[2]] I & /@
              Select[Flatten[Table[{x, y}, xlist, ylist], 1],
               With[{z = #[[1]] + I #[[2]]}, pred[z]] &])]]]]

Möbius maps - these are of the form \(z \mapsto \dfrac{az+b}{c z+d}\).
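A hypothetical usage of `whereRegionGoes`, plotting the image of the upper half of the unit disc under the map \(z \mapsto \frac{z-i}{z+i}\) (the map and the sampling ranges are just illustrative choices):

```mathematica
(* xrange and yrange are {min, max, step} iterator bounds for the sampling grid;
   pred selects which grid points lie in the region of interest. *)
whereRegionGoes[(# - I)/(# + I) &,
  Abs[#] < 1 && Im[#] > 0 &,
  {-1, 1, 0.05}, {-1, 1, 0.05}]
```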

I’m running through my Analysis proofs, trying to work out which ones are genuinely hard and which follow straightforwardly from my general knowledge base. I don’t find the Heine-Borel Theorem “easy” enough that I can even forget its statement and still prove it (like I can with the Contraction Mapping Theorem), but it turns out to be easy in the sense that it follows simply from all the theorems I already know.

A little while ago I set myself the exercise of stating and proving the Contraction Mapping Theorem. It turned out that I mis-stated it in three different aspects (“contraction”, “non-empty” and “complete”), but I was able to correct the statement because there were several points in the proof where it was very natural to do a certain thing (and where that thing turned out to rely on a correct statement of the theorem).

It has been commented to me that it’s quite hard to find out (on the Internet) what different games involve. For instance, Agricola is a game about farming (and that’s easy to find out), but what you actually do while playing it is not easy to discover. Here, then, is a brief overview of some games. Agricola Agricola is a game in which you control a farm, and are aiming to make your farm thrive.

I feel that I can write a sonnet well.
While sonnets are an easy thing to spout,
It’s really hard to write a villanelle.

By rhyming, any story I can tell:
in couplets, rhyme and rhythm evens out.
I feel that I can write a sonnet well.

But alternately-structured verse is hell.
The poet struggles, juggles words about:
It’s really hard to write a villanelle.

Enthusiasm’s difficult to quell.
An acolyte of Shakespeare, I’m devout:

The aim of this post is twofold: to find out whether a certain mental habit of mine is common, and to draw parallels between that habit and the writing of essays. I don’t know whether this is common or not, but when I’m feeling particularly not-alert (for instance, when I’m nearly asleep, or while I’m doing routine tasks like cooking), I sometimes accidentally latch onto a topic and mentally explain it to myself, as if I were teaching it to the Ancient Greeks (who, naturally, speak English).

Recently, I was browsing the Wolfram Community forum, and I came across the following question: What are the symbols @, #, / in Mathematica? I remember that grasping the basics of functional programming took me quite a lot of mental effort (well worth it, I think!) so here is my attempt at a guide to the process. In Mathematica, there are only two things you can work with: the Symbol and the Atom.
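Since the question is so concrete, here are a few one-liners illustrating those symbols (the comments record what each expression evaluates to):

```mathematica
f@x                  (* prefix application: exactly the same as f[x] *)
#^2 &                (* a pure function: # is its argument, & closes the function *)
(#^2 &)[5]           (* applying the pure function: evaluates to 25 *)
f /@ {1, 2, 3}       (* Map: gives {f[1], f[2], f[3]} *)
(#^2 &) /@ {1, 2, 3} (* mapping the pure function: evaluates to {1, 4, 9} *)
```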

A long time ago, in a galaxy far far away, I completed Myst III: Exile. It’s a stupendously good puzzle game. For some reason, it popped into my mind again a couple of days ago. This post contains very hefty spoilers for that game (it will completely ruin the ending - I will be discussing information-exchange protocols which are key to completing it), so if you’re ever going to play it, don’t read this post yet.

Once upon a time, before this bountiful age of Matter and Light, there was only the Fell. A single being, surrounded by Chaos, content to remain alone forever (for it did not know what a “friend” was). It had not the power to shape the Chaos; neither had it the inclination, for it needed nothing and had no desires. For seething unchanging aeons, it persisted. Then Chaos bore new fruit. A single electron, a point source of charge.

I am shortly to receive a new Nexus 5. I am determined not to become a smartphone zombie, and so I hereby commit to the following Charter. I will keep my phone free of social networking apps, and I will ensure that I do not know the passwords to access their web interfaces. While they can be really quite handy, they are usually simply a distraction. People are used to the fact that I am present on the Internet only when I have my computer with me; there’s no need for that to change.