# Sum say “Don’t sell your soul to consultancy”…

Environmental consultancy was never a career path I had considered, but then again, never say never. This changed when I dipped my toes in the consultancy waters as part of an ACCE placement scheme, travelling to Long Island, NY, to work with Lev Ginzburg and Nick Friedenberg at Applied Biomathematics. Despite the differences from how work is conducted during my PhD, much to my surprise, the similarities with consultancy are aplenty.

As an ACCE student I have the option to take a placement of up to 3 months working with another organisation – preferably outside of academia – to widen my career perceptions. There is, however, a catch: any work conducted cannot be used in your thesis – and it’s a good one. It’s a chance to show yourself that the skills you are learning can be used outside of the lab – or office, in my case. So, I applied to take my skills to the USA to see exactly what I could do in the consultancy world. Applied Biomathematics is a small company that develops mathematical and statistical software and provides general consultancy on environmental and ecological predictions.

So, what was I working on? As a bit of background, the United States is one of the biggest producers of corn – often turned into corn syrup – which is grown predominantly in an area known as the “Corn Belt”. Roughly, this is the area stretching east–west from Lake Erie to the Rockies, and north–south from Lake Superior to the Deep South…it’s immense! To hammer home the point: I lived in Iowa during an exchange while I was an undergraduate, and the drive from Chicago to Iowa City takes 4 hours. After about 30 minutes you leave the greater Chicago area and meet the first corn field, after which there is nothing but corn until Iowa City – and it doesn’t stop there. So why am I banging on about the size and intensity of the agriculture? Well, it’s obvious that this has enormous economic significance, and a finer point is the interest in anything which may threaten it. Enter the other interested party: an army of invertebrates which can’t believe their luck! To get around this issue, the United States Dept. of Agriculture (USDA) has embraced GM crops in the form of Bt corn – corn genetically engineered to produce toxins from the bacterium Bacillus thuringiensis that are fatal to the pests. But this isn’t job done; the pests fight back by evolving resistance, and so begins the inevitable evolutionary arms race. Many producers, such as Monsanto and DOW, have come up with strategies to help counteract evolution, one of which is known as “pest refuges”. The idea is that 75% of the field is planted with Bt corn and the remaining 25% is untreated. The pests flock to the untreated corn and do not develop resistance; any that do develop resistance will breed with those that have none, keeping up the efficacy of the pesticide – or so the theory goes. In addition to the usage of Bt corn, the USDA requires a 10-year resistance plan: to sell the product, the suppliers must ensure that resistance doesn’t develop over 10 years.
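To see why refuges slow the arms race, here is a minimal one-locus sketch of resistance evolution, not the Applied Biomathematics model (which also tracks demographics, dispersal, and spatial structure). All rates are made up for illustration, and resistance is assumed recessive: only RR homozygotes survive on Bt corn, while everyone survives in the refuge.

```python
# Toy one-locus model: how fast does a resistance allele spread with and
# without a refuge? Assumptions (all hypothetical): random mating,
# Hardy-Weinberg genotype frequencies, recessive resistance, and a 99%
# kill rate of susceptible genotypes on Bt corn.

def next_allele_freq(q, refuge_frac, s_bt=0.99):
    """One generation of selection on a resistance allele at frequency q."""
    # Genotype frequencies before selection: RR, RS, SS.
    f_rr, f_rs, f_ss = q * q, 2 * q * (1 - q), (1 - q) ** 2
    surv = 1 - s_bt  # survival of S-carrying genotypes on Bt corn
    # Fitness averaged over Bt acreage and refuge acreage.
    w_rr = 1.0
    w_rs = refuge_frac + (1 - refuge_frac) * surv
    w_ss = refuge_frac + (1 - refuge_frac) * surv
    w_bar = f_rr * w_rr + f_rs * w_rs + f_ss * w_ss
    return (f_rr * w_rr + 0.5 * f_rs * w_rs) / w_bar

def gens_to_resistance(refuge_frac, q0=0.001, threshold=0.5):
    """Generations until the resistance allele passes the threshold frequency."""
    q, gens = q0, 0
    while q < threshold:
        q = next_allele_freq(q, refuge_frac)
        gens += 1
    return gens

print(gens_to_resistance(0.25))  # with the 25% refuge: far slower
print(gens_to_resistance(0.0))   # whole field planted with Bt
```

The refuge keeps susceptible alleles in circulation, so rare resistant survivors mostly mate with susceptibles and produce heterozygotes that the toxin still kills; in this sketch that stretches resistance out by an order of magnitude in generations.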
This is where Applied Biomathematics comes in: suppliers of Bt corn have contracted them to predict the absolute rate of resistance – a monumental task indeed. They are currently developing software which looks to address this, built around a model which considers pest demographics and genetics, transgenic trait characteristics, the spatial structure of treated and untreated fields, the dispersal of the pests, and resistance management. My role in this task was designing a more efficient dispersal algorithm than was previously being used. The pre-existing algorithm took approx. 1500 seconds to calculate 1 day of movement – not much movement in the grand scheme of things – whereas my contribution reduced this to 35 seconds, a roughly 40-fold speed-up. It should be said that one of the main reasons I could do so was the regular geometry of fields in the Corn Belt – things are always easier when dealing with rectangles. Despite the incredible achievements of this project, the situation on the ground is difficult to say the least. The assumptions of the model are not entirely realistic due to one somewhat unpredictable factor. Farmers. The human element here can make predictions unreliable; take for example the farmer who thinks “refuge? No thanks! I’ll plant the whole field this way, I need the money to stay afloat”; or another who says “25%? I’ll plant this in a part of the field that is less fertile since it’ll be eaten anyway”. These factors are very important for accurate predictions, but the farmer is thinking about his livelihood, not the long-term resistance of the pests. A bag of Bt corn is around \$15,000, and if a farmer is unwilling to pay, or simply cannot, the overall spatial pattern is interrupted, which can potentially increase the rate of resistance.
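The post does not describe the actual dispersal algorithm, but here is a hypothetical sketch of why regular rectangles help. On a regular grid, a separable dispersal kernel can be applied as a row pass followed by a column pass, replacing a full 2D convolution with two cheap 1D ones; the kernel weights below are invented for the example.

```python
# Hypothetical sketch: dispersal as a separable kernel on a rectangular grid.
# A row pass then a column pass costs O(cells * kernel) instead of the
# O(cells * kernel^2) of a full 2D convolution -- one reason regular field
# geometry makes the computation so much faster.

def blur_1d(row, kernel):
    """Smear a 1D density with a symmetric kernel (edges simply clipped)."""
    half = len(kernel) // 2
    out = [0.0] * len(row)
    for i, v in enumerate(row):
        for k, w in enumerate(kernel):
            j = i + k - half
            if 0 <= j < len(row):
                out[j] += v * w
    return out

def disperse(grid, kernel):
    """One day of movement: smear each row, then each column."""
    rows = [blur_1d(r, kernel) for r in grid]
    cols = [blur_1d(list(c), kernel) for c in zip(*rows)]  # transpose, blur
    return [list(r) for r in zip(*cols)]                   # transpose back

kernel = [0.25, 0.5, 0.25]   # made-up daily dispersal probabilities
field = [[0.0] * 5 for _ in range(5)]
field[2][2] = 100.0          # all pests start in the centre cell
after = disperse(field, kernel)
```

After one step the 100 pests spread into a 3×3 neighbourhood, with 25 remaining in the centre cell and total abundance conserved (nothing reaches the grid edge here).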
Farmers aside, the pests themselves also evolve strategies to work around Bt corn. Farmers will often rotate corn with soy to crash the pest population — the pests dislike soy — but extended diapause evolves, and the eggs sit out the soy year to hatch the next season, ready for the corn… These are some examples of the challenges within that project, and I think you’ll agree that it’s deeply fascinating.

Insect resistance management is not the only project Applied Biomathematics is involved in; two others are: collision prediction models for Golden Eagles and wind turbines; and collaboration on a review of over 400 species for assessment under the Endangered Species Act. The former looks to improve collision models for energy companies seeking to build wind turbines. The United States Fish and Wildlife Service approves permits for eagle fatalities, that is, a limit on how many can be struck before incurring a fine — one which can be so steep it could bankrupt the often-small energy firms. The rate of permit approval is increasing due to collision prediction models which reward developers for more hours logged surveying the area – in theory a permit can be granted by having someone stand with binoculars for a long time. These current models can over- and under-predict where Golden Eagles are present because population sizes aren’t included, as they are in current European models. This runs the risk of wind farms being built where they shouldn’t be, and not being built where they should. Applied Biomathematics is developing species distribution models that project the abundance of breeding and non-territorial individuals onto the landscape; this way sites can be assessed without collecting new population data. Collecting the data they require is also a challenge, because field surveyors keep their data hidden: once they hear an electricity company is involved, they fear nest sites will be removed so construction can begin. The second project works in conjunction with the Electric Power Research Institute, a non-profit research group funded by electricity companies in the USA. An environmental group agreed a settlement with the Fish and Wildlife Service to review over 400 species for assessment under the Endangered Species Act.
A deadline has been set for the species assessments, which has utility companies and regulators scrambling to meet a nearly impossible timeline — which is when an army of consultants is contracted. Despite being a mainly mathematical consultancy, Applied Biomathematics has conducted a series of studies requiring a light dusting of theory and heavy reviews of literature, law, policy, and current practice, to help suggest ways to get the best results out of conservation plans that address multiple species.

This has been a brief, if on the long side, summary of my time working in the consultancy sector. My first opinion of consultancy was common among others I have spoken to: that consultancy is an extortion racket preying on unsuspecting victims. This couldn’t be further from the truth. The myth seems to come from the “Golden Rule” of consultancy: however much you think it will cost, triple it. A premise not so unfamiliar in academic circles, but there applied to time. This rule of thumb is true, but there is a good reason for it. In reality, the client approaches the consultant unaware how much a project will cost – which is why they approach the consultant in the first place, they’re not the expert – and, in truth, neither does the consultant. Any complex problem requires time and resources invested before a coherent attempt at an answer is provided. Quoting three times what the consultant anticipates relieves the burden of asking for more funding if problems are encountered, or if the analysis takes longer than anticipated. If the project comes in under budget, well, that’s money kept for a rainy day when contracts may be scarce. If there is an overspend, then more funding must be requested, which can reduce the client’s faith in the consultant. It actually makes a lot of sense in that regard. I also used to think that consultancy was very dry and wouldn’t utilise my skills. Where I got this idea from I have no idea, but it was there, and it is probably there for many others. The working environment was almost identical to my daily PhD work, but the questions being asked were markedly different. In academic projects your work is mainly exploratory; one can investigate interesting phenomena as they are encountered. In consultancy the work is direct, and satisfying the client is the motivating factor. Funding is also similar: rather than applying for grants, the consultant is on the look-out for contracts.
So, I can say that the only real differences I found between consultancy and academia were that questions are asked in a different way, and that the balance of exploratory versus focused work is almost polarised, albeit with some inevitable overlap.

I found the whole experience incredibly rewarding; not only was I able to gain a different scientific perspective, I also had a break from PhD work which gave me time to think about concepts and ideas. I managed to meet some wonderful people who offered me their gracious hospitality, and gained a few friends along the way. There was also the additional benefit of living 2 hours outside of New York City, which allowed me to visit one of the most interesting places in the world. While I was there I managed to see no fewer than 4 Broadway shows – I highly recommend Jersey Boys. Long Island is a beautiful place and I will look back on my time there with fond memories. To sign off, I will leave a message from Nick Friedenberg to other PhD students contemplating consultancy:

“In my experience, people in industry and regulatory agencies are commonly saddled with problems just beyond the scope of their training. This is true for academics as well and it is the sort of thing that keeps us interested in our work. However, outside of academia is a world of limitations that make learning new things much harder. The analytical skills you are learning, particularly when they combine mathematical techniques with a mastery of metaphor, prepare you to fill the role of collaborator with non-academic scientists and engineers. Pursuing such roles can be done as an academic, but do not overlook opportunities in the private sector. Here at Applied Biomathematics, we have seen some people come through like post docs, returning to university positions after three years. Others have gone on to other consultancies or into agency work. And a select few stayed, kept up their academic network, published regularly, and returned to academics as full professors. In short, the private sector needs you, will make you feel valuable, can help you flourish as a scientist, and is not a black hole.”

Sum say “Don’t sell your soul to consultancy”…

I say “One person’s devil is another person’s salvation”

# Sum say we live in a computer simulation…

This idea has jumped into the mainstream media recently — albeit a while ago; this post has been lying in a file for months now, sorry! — because of something Elon Musk – of SpaceX and Tesla Motors, but perhaps better known for his plans to colonise Mars – said at Code Conference in California in June last year. Since his comments there have been dribs and drabs of media reporting on this, from the Independent, The Guardian, and the BBC to name some of the mainstream news outlets. It hasn’t escaped the entertainment industry either, with HBO’s “Westworld” series based on the film of the same name – without spoilers, this is, in principle, along the same lines. The idea was the subject of a 2003 paper by Swedish-born philosopher Nick Bostrom titled “Are you living in a computer simulation?”. It certainly cannot be credited to him, as anyone who has ever played a computer game – the Sims, anyone? – will know. Many a time after playing for hours I had these thoughts: what if someone is controlling me in a Sega game? (I was a fan of Sonic the Hedgehog and the Megadrive.) In fact, one doesn’t even require a computer simulation; this is just the most recent incarnation of a question as old as thought itself: is what I am experiencing reality? Many philosophers throughout history have toyed with this devilish little question; from Plato’s Cave, of shadows projected onto walls; to Descartes’ Evil Demon twisting his senses; to Gilbert Harman’s brain in a vat being stimulated with electrodes; and now, with a 21st-century twist, the universe as we know it being simulated by a computer. This time on the merry-go-round there are some important differences — as well as similarities — with the previous incarnations, and it ties in with another ancient question: is there an omnipotent being responsible for creation; is there a God?

Let’s examine this concept and the arguments given by Bostrom for a simulated universe. He provides some assumptions that must be made, some of which I take issue with, but that is not the point of the exercise. Assuming that human technology continues its development following the current exponential trend, humans will have the computational power to simulate a model of the universe — more importantly for us, every single interaction within the brain that allows us to be human. After all, from what we know about the brain, it is nothing more than a densely-connected network relaying signals incredibly quickly. It’s not a stretch of the imagination to say this could be possible in the future, or even now; the internet is rapidly “evolving”. Bostrom estimates the human brain performs on the order of $10^{16}$–$10^{17}$ operations per second, so for the current population of the globe the total is on the order of $10^{26}$–$10^{27}$ operations per second. This would paralyse the best of today’s computers, even without real-time simulation, and even ignoring the fact that many of these operations are not independent. However, for an advanced civilisation that has transcended our current limits, the estimate for a planetary-sized computer is a whopping $10^{42}$ operations per second – more than enough for humans anyway. Other theoretical upper bounds on computation, cited in Bostrom’s paper, run as high as roughly $10^{50}$ operations per second for a kilogram of matter. Our current progress follows what is known as “Moore’s Law” – roughly, that the number of transistors in an integrated circuit doubles every 2 years. If one thinks about this, it doesn’t take long to conclude that space is the limiting factor – and getting around this is “easy”: build a bigger computer! Now, provided we have such awesome power at our disposal, we can come to the philosophical conclusions of this thought experiment:

1. The human species will become extinct before ever reaching the necessary advancements to conduct such simulations.
2. Should humans ever reach an age of technological advancement at which such simulations could be conducted, they are not interested in running them.
3. The opposite of proposition 2, that advanced humans are interested in running such simulations and that we are, in fact, living within one at this very moment in time.
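The orders of magnitude above can be put side by side in a quick back-of-envelope calculation. The per-brain and planetary-computer figures are taken from Bostrom’s paper; the population figure is my own round number.

```python
# Back-of-envelope comparison (orders of magnitude only). Assumptions:
# ~1e17 operations per second per human brain (Bostrom's upper estimate),
# ~7e9 humans alive, and ~1e42 operations per second for a hypothetical
# planetary-mass computer.

brain_ops = 1e17                       # ops/s per brain (upper estimate)
population = 7e9
all_brains = brain_ops * population    # ops/s to simulate every living mind

planetary_computer = 1e42              # ops/s, Bostrom's planetary estimate

# A single such machine could run many copies of "everyone at once":
concurrent = planetary_computer / all_brains
print(f"{all_brains:.1e} ops/s needed; {concurrent:.1e} concurrent simulations")
```

Even on these crude numbers, one posthuman computer could host around a thousand trillion simultaneous copies of the entire present-day human population, which is why the argument treats the number of simulations as effectively unbounded.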

So how do we arrive at each conclusion? This is where Bostrom’s original paper gets very specific, and we should take a little time to set the scene for his argument. He imagines that civilisations such as ours reach a “posthuman” stage, when we cast off the shackles of our current evolutionary limits and open the technological doors necessary to run the powerful simulations required. Not only does such a civilisation exist, but it would also be interested in running “ancestor simulations” – simulations of its ancient past – for whatever reason. Let us take the proportion of all human-level technological civilisations that survive to the posthuman stage, $f_{p}$, of which only some are interested in running such simulations, $f_{i}$. We multiply these by the average number of simulations run by such a civilisation, $\overline{N}$, and then by the average number of individuals that have lived in a civilisation before it reaches the posthuman stage, $\overline{H}$ – this accounts for everyone who could possibly be simulated. We then take this product and divide by the sum of all those accounted for by simulations and the average number of individuals that have lived in a civilisation before it reaches the posthuman stage – essentially everyone. This last division acts as a normalisation to give us the proportion of individuals living in a simulation, $f_{sim}$. This is then mathematically expressed as

$f_{sim}=\frac{f_{p}f_{i}\overline{N}\overline{H}}{\big(f_{p}f_{i}\overline{N}\overline{H}\big) + \overline{H}}$,

which can be simplified to

$f_{sim}=\frac{f_{p}f_{i}\overline{N}}{f_{p}f_{i}\overline{N} + 1}$.

From here we can deduce the three conclusions given above. We can assume that, because the civilisation is as advanced as previously discussed, $\overline{N}$ would be very large. If this is the case, then the three conclusions are condensed to:

1. $f_{p}\approx 0 \implies f_{sim}\approx 0$.
2. $f_{i}\approx 0 \implies f_{sim}\approx 0$.
3. $f_{sim}\approx 1$.

This is, by and large, the crux of Bostrom’s entire argument. If $f_{p}$ and $f_{i}$ are not vanishingly small, then because $\overline{N}$ is assumed to be astronomically large, $f_{sim}$ is driven towards 1 — which is where the conclusion that we are almost certainly all subjects within a huge simulation comes from.
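Plugging hypothetical numbers into the simplified fraction makes the three cases concrete; all parameter values below are arbitrary illustrations, not estimates.

```python
# Bostrom's simplified fraction f_sim = x / (x + 1), where
# x = f_p * f_i * N_bar, evaluated for a few hypothetical parameter choices.

def f_sim(f_p, f_i, n_bar):
    x = f_p * f_i * n_bar
    return x / (x + 1)

print(f_sim(0.0,  0.5,  1e6))  # no civilisation reaches posthumanity
print(f_sim(0.5,  0.0,  1e6))  # posthumans exist but none are interested
print(f_sim(0.01, 0.01, 1e6))  # even tiny fractions, with many runs each
```

The first two cases pin $f_{sim}$ at exactly 0; in the third, $x = 100$ gives $f_{sim} = 100/101 \approx 0.99$, showing how quickly a large $\overline{N}$ saturates the fraction towards 1.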

If you were somewhat confused by this argument, you are not alone. The problem with this probability argument is that it is mathematically misleading – or incorrect. The fraction is described in Bostrom’s paper as the “actual fraction of observers with human type experiences that live in simulations”, yet the quantities in it are counted in numbers of simulations. This does not make sense; fractions do not have units. The definitions given are cryptic, and herein lies the problem: by applying Occam’s razor a little too diligently, one risks also shaving off the clarity. Let us start again with the following definitions:

1. $P :=$ total number of non-simulated individuals.
2. $f_{p} :=$ proportion of those that advance to a posthuman stage.
3. $f_{i}:=$ proportion of individuals interested in running ancestor simulations.
4. $N :=$ average number of simulations run by the interested individuals.
5. $S :=$ average number of simulated individuals per simulation.

Then the proportion of simulated individuals is given as

$f_{sim}=\frac{Pf_{p}f_{i}NS}{\big(Pf_{p}f_{i}NS\big) + P}$,

which draws the same conclusion, but with corrected units. I believe this is easier to understand than the original derivation given by Bostrom. The above relation contains parameters that he had excluded – probably because he assumed they were implicitly defined, but the definitions do not make this clear.
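That the rewritten fraction draws the same conclusion is easy to verify numerically: the total non-simulated population $P$ appears in every term of the numerator and denominator, so it cancels. The parameter values below are arbitrary, chosen only for the comparison.

```python
# The explicit-population fraction reduces to the simplified form because
# P cancels top and bottom: (P*x)/(P*x + P) == x/(x + 1), x = f_p*f_i*N*S.

def f_sim_full(P, f_p, f_i, N, S):
    simulated = P * f_p * f_i * N * S   # total simulated individuals
    return simulated / (simulated + P)  # proportion of all individuals

def f_sim_simple(f_p, f_i, N, S):
    x = f_p * f_i * N * S
    return x / (x + 1)

print(f_sim_full(1e10, 0.1, 0.01, 100, 1e9))
print(f_sim_simple(0.1, 0.01, 100, 1e9))
```

Whatever value $P$ takes, the two expressions agree, which is exactly why the size of the base population drops out of the argument.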

This argument is reminiscent of the Drake Equation for how many planets support intelligent life and our chances of contact with them. When Frank Drake first proposed his equation, it caused a discussion about what assumptions were made — the chink in many a model’s armour. I’d also like to question Bostrom’s initial set-up, which I find rather restrictive. He claims that the simulators would be a posthuman species, which I find isn’t a necessary requirement. Sure, it makes sense since we are the only intelligent beings that we know of, but if we fold in the Drake Equation too, then it could be any civilisation, which only makes the case more likely.

### Evidence for this?

Let us imagine that the argument is, in fact, true and we do live in a simulation. Should we then expect to see some evidence for this? Some believe there already is: the quantum world – the realm within physics for the treatment of the very small – for example. Within quantum mechanics there is wave-particle duality, the theory – very successful, I might add – that waves, which are usually properties of a continuum, also behave like particles, which by definition are discrete. The basic premise behind quantum mechanics as evidence for a simulation comes from the same principle as a picture, be that developed from film, printed on paper, or displayed on a computer screen: when examined very closely, we see the individual pixels. Perhaps the natural world is also pixelated, and at the very small scale we reach a finite resolution, as a computer would. Another aspect which ties into the discreteness argument is rules, which are also finite. The natural world is very well described by mathematics — too well, it may seem. Why is there an inherent mathematical explanation of the world around us? Or, more aptly, why is the world so structured? The undeniably spooky level of structure in the world has always been at the back of the minds of physicists throughout history. Why might this be? Is it happy coincidence? Or is it because the code the universe is written in demands it? If we have been coded, then we should expect some errors to be found. If the language of the universe is mathematics, then surely the programmer has made errors — and how would these manifest? Well, it so happens that mathematics does indeed have errors of a sort; we call them logical paradoxes. This is when mathematics tells us something must be true when, in fact, we know it to be nonsense.
I’d like to draw your attention to the “Two Envelope Problem”, which is similar in flavour to the “Monty Hall Problem”, where a choice of three doors with a reward behind one is offered. You select a door, after which the host opens one of the losing doors and asks if you want to swap. If you know your probability theory then you’ll know to swap, since swapping increases your probability of being correct from 1/3 to 2/3. In the two-envelope problem, however, the naive expected-value calculation says that swapping should increase your reward every time you swap. This is an obvious fallacy, yet the mathematics seems to say it is true — why? Because we are living in a simulation? There are many of these paradoxes that some would argue are evidence of errors in the source code of our simulated universe. It’s an interesting thought.
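The Monty Hall result is easy to check empirically with a short Monte Carlo simulation: switching wins about two-thirds of the time, sticking about one-third.

```python
import random

# Monte Carlo check of the Monty Hall problem.

def play(switch, rng):
    prize = rng.randrange(3)
    pick = rng.randrange(3)
    # The host opens a door that is neither the pick nor the prize.
    opened = next(d for d in range(3) if d != pick and d != prize)
    if switch:
        # Switch to the one remaining unopened door.
        pick = next(d for d in range(3) if d != pick and d != opened)
    return pick == prize

rng = random.Random(42)       # fixed seed for reproducibility
trials = 100_000
wins_switch = sum(play(True, rng) for _ in range(trials)) / trials
wins_stick = sum(play(False, rng) for _ in range(trials)) / trials
print(wins_switch, wins_stick)
```

Unlike Monty Hall, the two-envelope paradox cannot be dissolved this neatly by simulation, which is precisely why it remains a genuine paradox rather than a mere counter-intuitive result.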

Despite this, I am not entirely convinced we are. If we look at the structure of the universe, no one can deny that it is very odd how rigidly the universe adheres to it, but it isn’t evidence of a simulation. The reason for this is exactly the same reason that many atheists do not believe in God. If we are in a simulation, then who created those simulating us? Their universe must presumably be very much like ours, otherwise how would it function? (If any of the physical constants were even a fraction off, the whole universe couldn’t exist – the Anthropic Principle rears its head.) Therefore, our existence must be based on some other founding existence, which then also must be mathematically structured like ours. I also find the discrete nature of the universe similarly lacking as evidence. Our understanding of the universe, although remarkable, is very much in its adolescence. Something that has puzzled me is the insistence that the universe is continuous — why does this have to be the case? Arguably the Theory of Relativity requires space-time to be continuous, but until there is unification of the small and large — quantum mechanics describes the very small, relativity the very large — nothing should be taken for granted. After all, the continuum description in physics relies on discrete particles behaving like a continuum when there are enough of them – it is a hypothesis. We as humans also cannot escape the discrete world: we see discretely, as a series of flickering images; we think discretely, as a series of firing neurons; why then is the rest of the universe not discrete? This is a topic for another time, but to clarify, the main historical reason we use continuous equations is the difficulty of calculating all the interactions of individual particles; modern computers are now making this task easier.

And what if conclusive evidence was found? It would be the most important discovery in history – and that is where it would end, as a discovery. No deeper questions could feasibly be answered, such as “why are we being simulated?” or “if we are simulated, what would other simulations be examining?” These are questions that could never possibly be answered from inside the simulation. To illustrate this point: I run simulations of organisms that have only 3 traits, which differ between simulations, to determine how fast the organisms invade landscapes. Let’s imagine they are sentient. If they discovered they were actually living in a program I have written, there is no possible way they could know why they existed. The scientific process would almost certainly end at discovery – unless, of course, our progress was the subject of the simulation, which is an arrogant assumption. If we do live in a simulation, we may not actually be the focus of the experiment, if it is indeed an experiment. The simulation may be a game — like “The Sims” — or we may be a tiny consequence of something much bigger, like a single thread in a medieval tapestry.

### Why does this matter?

To be short: it doesn’t. Well, perhaps that’s too quick – Bostrom does tackle the blissful-ignorance issue, but I won’t go into this; you can read it yourself in the original article. If we live in a simulation then nothing changes; your life, and mine, will continue as it always has. If we discover this to be true, then what are the consequences? People would have to come to terms with it and move on with their lives, believing it or not. I would even hazard a guess that people would ignore it; blissful ignorance. Some would say that we must appease our simulators – if they are even interested in us – like the pagan religions of old. If there were a chance that the plug would be pulled, then I imagine we wouldn’t know anything about it. These arguments tie in with the age-old question – “Is there a God?”, or a higher being pulling the strings behind the scenes? – but they don’t really answer it. Another issue often discussed is whether we can break out of the simulation. To do so would be incredibly difficult, if not impossible. One could perhaps save a copy of oneself, but that isn’t breaking out so much as preserving oneself – the digital equivalent of cryopreserving the body. The out-of-simulation world may be so different that the mental strain would be too much – it’s often tough enough in our own world, never mind outside it. The “red pill, blue pill” situation from “The Matrix” comes to mind. To look at this another way, we may not even be able to comprehend the “real” world. If I went back to my simulated organisms and brought them out into our world, how would I begin to explain hydrogen to them? There is a nice video from “Science in the Bath” about the perception of colour (link below). In this video, there is a colour palette of green squares which look identical, but this isn’t true; there is an outlier which I – and most people – cannot see. However, when shown to a member of the Himba tribe in Namibia, they can spot it straight away.
The kicker is that when that tribesperson was shown the same palette with a blue square in amongst the green, they struggled to find it. This is all down to language and how we perceive the world around us; to step out of our own perceived world and into the simulators’ world would almost certainly produce a similar confusion.

This question is mainly philosophical, with very little input from science. Evidence is almost impossible to gain – that is, if we could even comprehend it. So I can say that, until future advancement, this topic should remain within the field of philosophy. I feel the simulation argument is a case of the Emperor’s new clothes: fun to think about, but nothing new.

Sum say we live in a simulation.

I say, “The jury is out”

I was too lazy to properly reference, but here is the material cited within the post:

http://www.bbc.co.uk/earth/story/20160901-we-might-live-in-a-computer-program-but-it-may-not-matter

https://www.theguardian.com/technology/2016/oct/11/simulated-world-elon-musk-the-matrix

https://www.theguardian.com/technology/audio/2016/dec/23/constructed-consciousness-are-we-living-in-computer-simulation-tech-podcast

https://www.scientificamerican.com/article/are-we-living-in-a-computer-simulation/

http://www.simulation-argument.com/simulation.html – Original paper by Nick Bostrom.

https://en.wikipedia.org/wiki/Drake_equation – The Drake Equation in Wikipedia

https://en.wikipedia.org/wiki/Monty_Hall_problem – The Monty Hall Problem in Wikipedia

https://en.wikipedia.org/wiki/Two_envelopes_problem – The Two Envelopes Problem in Wikipedia

https://www.youtube.com/watch?v=7RLCXO85Mpg – Science in the Bath episode.