
Welcome to my page!

Last update: 2018-10-23

This website is an experiment in coding with CSS, HTML and a minimal amount of JavaScript in an attempt to make a responsive design. A page like this needs some content, so below I describe a few of my interests in Engineering and Technology (for you recruiters). If there is enough time, this site will eventually turn into a blogging platform.

HPC, Super Computing and Other Forms of Computing

Distributed Computing is when we have a system of computers networked together to solve a common problem. Each computer can be seen as a node, and the nodes may all be placed in the same room, they may be placed in separate locations far from each other, or they may be any combination of the two. As long as the machines are physically separate and work on solving a common problem, we have Distributed Computing.

With Distributed Computing one strives for scalability of the nodes in administration, size, functionality and geography. By administrative scalability one strives for the ability of an increasing or decreasing number of organizations or users to easily share a single distributed system. An example of administrative scalability is when, at the start of a new school year, a university easily provides computer resources (such as e-mail accounts, log-in accounts, terminals etc.) to new first-year students and removes graduates from its distributed system. By scalability in size we mean that we can easily add nodes to, and remove nodes from, the distributed system. An example of scalability in size is when file-sharing users, who act as the nodes, join or leave a torrent network. By functional scalability one strives to upgrade or add new functions to the distributed system with minimal effort. An example of functional scalability is when developers add new apps to Facebook. By geographical scalability we mean that, when expanding the distributed system with new nodes, the location of those nodes has little impact on the performance of the system. For obvious reasons, geographical scalability can sometimes be hard to maintain, especially if distance is a bottleneck. Suppose we are in Frankfurt fetching content from London and then want to fetch the same amount of content from Beijing; the content to and from Beijing is filtered through the Great Firewall of China, which causes delays.

Two good examples of Distributed Computing systems are automatic teller machines and Big Data clusters. Distributed Computing is interesting to me for two main reasons: it deals with resource management and with performance in computing.

Cloud Computing is shared pools of configurable computer system resources and higher-level services that can be rapidly provisioned with minimal management effort, often over the Internet. It is not unusual for Cloud Computing services to use techniques from Distributed Computing. Three very popular cloud services from corporate giants are AWS from Amazon, Azure from Microsoft and Google Cloud from Google. Cloud Computing has become very popular lately, as customers supposedly rent resources at reduced cost compared to owning and maintaining the equipment. Myself, I do not at this point find Cloud Computing particularly interesting, because I find Cloud Computing services overpriced and I am already able to maintain computers (under Linux). There are plenty of articles out there on how to set up mining rigs on cloud servers, and I am sure they all cost more than the value they generate. However, the underlying mechanism in Cloud Computing, which is Distributed Computing, is obviously interesting to me. There may be other factors, such as a lack of hardware resources, capital or competent manpower, that force customers to subscribe to "overpriced" Cloud Computing services. Other factors include uptime reliability, reliability of maintenance and speed of service. Legal regulations or internal policies may also push one towards Cloud Computing to reduce harm from equipment failure, theft, burnouts, fires, natural disasters and catastrophes.

High-Performance Computing (HPC) integrates mathematics and technology. Here I will mention three subjects: supercomputing, numerical analysis and Parallel Computing. A distributed network of computers can be used as an HPC solution. If the distributed network of computers is located in the same room, then we possibly have a supercomputer. The supercomputer Blue Gene/P from IBM runs 164,000 processor cores and its system is grouped into 40 rack cabinets, as can be seen in the picture.


Blue Gene/P

Numerical analysis is the art of being able to solve mathematical problems approximately, with simplified models that a computer can actually handle.

Performing measurements, collecting, representing, transforming and presenting data is not analysis in the mathematical sense (it can, however, be analysis in the statistical sense; confusing?). Most laymen, i.e. non-engineers and non-mathematicians, are not aware of the meaning of (mathematical) analysis. Analysis, in the mathematical sense, deals with formulating a problem and then modeling and solving it by use of limits and sequences. A biologist measuring the walking speed of caterpillars is not performing (mathematical) analysis; she is performing measurements and collecting sample data. The engineer studying and modeling the walk of a caterpillar is performing analysis. Believe it or not, it is sometimes possible to complete mathematical analysis without any sample data as input. (See here: Gedankenexperiment). The main difference between numerical analysis and mathematical analysis is that in numerical analysis one usually deals with even further simplified and approximated models fit for use in computers, see (2). So, if you as a recruiter listen to me, I may use the word analysis when you should interpret it as modeling. To me, mathematical analysis is the fun part of a job, while performing measurements is the boring part. So to speak, within analysis we have white-collar and blue-collar tasks.
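To make the difference concrete, here is a small sketch (my own toy example, not taken from the text above) in which the exact derivative from mathematical analysis is replaced by a finite-difference approximation, the kind of simplified, computer-friendly model that numerical analysis deals with; the test function and the step size h are arbitrary choices:

    import math

    def derivative_fd(f, x, h=1e-5):
        """Approximate f'(x) with a central finite difference.

        Mathematical analysis defines the derivative as a limit;
        numerical analysis replaces the limit with a small but finite
        step h, which introduces a controllable approximation error.
        """
        return (f(x + h) - f(x - h)) / (2.0 * h)

    x = 1.0
    print(derivative_fd(math.sin, x))  # numerical estimate
    print(math.cos(x))                 # exact answer from analysis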


Little Gene

Parallel Computing is the art of running computations more or less simultaneously in a multi-core or multi-processor environment. Distributing a problem to several processors requires a great deal of management. Clearly, the supercomputer Blue Gene/P with 164,000 processor cores is not used for running one process at a time while keeping the other 163,999 cores idle. It is more beneficial to keep as many processors busy simultaneously as possible, which unfortunately brings many new complexities, such as how to redistribute and organize the work. As a hobby, I have built myself an HPC system using old computer parts from 2007. This little rig is called the Little Gene, see the photo. My hobby - playing with Little Gene - has taught me a lot.
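As a minimal sketch of the parallel idea (a toy example of my own, nothing to do with Blue Gene/P or Little Gene), the snippet below splits one large sum over several worker processes and then gathers the partial results; the chunking scheme is just one simple way of distributing the work:

    import multiprocessing as mp

    def partial_sum(bounds):
        """Sum the integers in [lo, hi); one chunk of the shared problem."""
        lo, hi = bounds
        return sum(range(lo, hi))

    if __name__ == "__main__":
        n = 10_000_000
        workers = mp.cpu_count()
        step = n // workers
        chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
                  for i in range(workers)]
        with mp.Pool(workers) as pool:
            # scatter the chunks to the workers, then gather and combine
            total = sum(pool.map(partial_sum, chunks))
        print(total)  # equals sum(range(n)), but computed in parallel

Most of the complexity in real HPC work lies exactly in this scatter-and-gather step, only at a vastly larger scale.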

Financial Mathematics and Computational Finance

Finance
Even as late as the '90s, economists laughed at engineers claiming they could do jobs in finance better. In 1973, some researchers figured out how to price European call options; they were Fischer Black, Myron Scholes and Robert Merton. The mathematics behind their work is rigorous and cannot be taught to people without high skills in mathematics. In the very beginning, professors from the Royal Institute of Technology (KTH, Sweden) earned fortunes from the mispricing that existed at the time. Pricing financial contracts and hedging them has been, and still is, the rocket science of finance. The underlying mathematics is difficult enough that engineers and mathematicians need a Ph.D. degree to fully understand the mechanics behind pricing and hedging assets. Even today, we lack the computer resources to perform advanced computations in finance. I have a great interest in risk analysis and quantitative finance, since they involve methods of simulation and equation solving just like those used in aerospace engineering. A few of these mathematical methods are Monte Carlo simulation and the use of finite differences or finite elements.
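To give a feel for one of these methods, here is a small, self-contained Monte Carlo sketch (my own toy example, with arbitrary parameter values) that prices a European call by simulating the terminal stock price under the standard Black-Scholes assumptions and averaging the discounted payoff:

    import math
    import random

    def mc_european_call(s0, k, r, sigma, t, n_paths=100_000, seed=42):
        """Monte Carlo price of a European call under Black-Scholes dynamics.

        Simulates S_T = S0 * exp((r - sigma^2/2) * t + sigma * sqrt(t) * Z)
        with standard normal Z and averages the discounted payoff max(S_T - K, 0).
        """
        rng = random.Random(seed)
        drift = (r - 0.5 * sigma ** 2) * t
        vol = sigma * math.sqrt(t)
        payoff_sum = 0.0
        for _ in range(n_paths):
            z = rng.gauss(0.0, 1.0)
            s_t = s0 * math.exp(drift + vol * z)
            payoff_sum += max(s_t - k, 0.0)
        return math.exp(-r * t) * payoff_sum / n_paths

    print(mc_european_call(s0=100.0, k=100.0, r=0.02, sigma=0.2, t=1.0))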

Risk
Instead of volatility I will use the term noise. Most financial models, equations and formulas are derived using the normal distribution to approximate noise. Fortunately, the normal distribution has many very pleasant properties that make it easy to handle and work with. Because of these pleasant properties, the normal distribution is the favorite distribution among many mathematicians, engineers and economists. The Nobel Prize (1997) winning research by Black, Scholes and Merton for pricing and hedging European call options itself uses the normal distribution as noise. Unfortunately, there are many cases where modeling with the normal distribution as noise is not good enough, especially when it comes to anomalous behavior. Scholes and Merton worked at a company, Long-Term Capital Management, which collapsed in 1998 because it did not pay enough attention to the risks in its investments. The years passed, and then we had another global financial crisis in 2008, which showed us once again the danger of stock models using normally distributed noise. It became very evident that risk should be modeled with heavy-tailed distributions, i.e. anomalies do not follow the normal distribution. But there is a huge setback to improving the stock model: modeling with another distribution than the normal distribution makes many of the derived and commonly used models in finance dysfunctional and invalid, because the pleasant properties of the normal distribution are lost when changing to another type of noise. So modeling noise without the normal distribution takes us back closer to square one. From the financial crisis in 2008 a new occupation emerged - the Risk Analyst. Even the best of the best (such as Nobel prize winners) need, from time to time, to sit on the pot reflecting over their models. What defines a winner?
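As a rough, hypothetical illustration of why the choice of noise matters, the sketch below compares the probability of a move beyond four standard deviations under normal noise with the same probability under a heavy-tailed Student-t distribution; the choice of 3 degrees of freedom is my own, arbitrary one:

    import math
    from statistics import NormalDist

    # Probability of a move more extreme than 4 standard deviations
    # if the noise is normally distributed ...
    p_normal = 2 * (1 - NormalDist().cdf(4.0))

    # ... and if the noise follows a heavy-tailed Student-t distribution
    # with 3 degrees of freedom (closed-form survival function for df = 3).
    def student_t_sf_df3(x):
        """P(T > x) for a Student-t variable with 3 degrees of freedom."""
        return 0.5 - (math.atan(x / math.sqrt(3))
                      + math.sqrt(3) * x / (x * x + 3)) / math.pi

    p_t3 = 2 * student_t_sf_df3(4.0)

    print(f"normal noise:       {p_normal:.6f}")   # about 6e-5
    print(f"Student-t(3) noise: {p_t3:.6f}")       # a few percent

Under the heavy-tailed model the extreme move is a few hundred times more likely, which is exactly the kind of anomaly a normal-noise model underestimates.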

Big Data, Machine Learning and Robotics

Control Theory
Today, Machine Learning, drones and Big Data are hyped. The mathematics behind controlling a drone is old, and so is some of the technology for self-driving cars. My thesis is based on teachings from Control Theory, a branch of mathematics developed further during the Space Race between the USA and the Soviet Union with the purpose of building the first manned spaceship to land on the Moon. Now, about 50 years later, we have failed several unmanned expeditions to Mars, yet we did not fail the manned flight to the Moon 50 years ago. So, never underestimate the power of analysis. By use of Control Theory we can compute values, as controls, to reach a goal or target while minimizing a cost, where the cost can be measured in time, in monetary terms, as a sum of errors, etc. For example, Control Theory can answer how much thrust each propeller in a drone needs in order to fly a certain path to a given goal.
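As a toy sketch of that idea (made-up numbers, not taken from my thesis), the snippet below computes optimal feedback controls for a one-dimensional linear system using the backward Riccati recursion of a finite-horizon linear-quadratic regulator, and then uses those controls to steer the state towards zero at minimal cost:

    def lqr_gains_1d(a, b, q, r, horizon):
        """Finite-horizon LQR for the system x_{t+1} = a*x_t + b*u_t.

        Minimizes the cost sum of q*x_t^2 + r*u_t^2 by running the
        Riccati recursion backwards in time and collecting the gains.
        """
        p = q                                       # terminal cost weight
        gains = []
        for _ in range(horizon):
            k = a * b * p / (r + b * b * p)         # optimal feedback gain
            p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
            gains.append(k)
        return list(reversed(gains))                # ordered t = 0 .. horizon-1

    a, b = 1.0, 0.5
    x = 5.0                                         # start far from the target 0
    for k in lqr_gains_1d(a, b, q=1.0, r=0.1, horizon=20):
        u = -k * x                                  # the control computed by Control Theory
        x = a * x + b * u                           # system response
    print(round(x, 4))                              # state has been steered close to 0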

Control Theory can be used in many more applications than controlling vehicles; it may, for example, be used for answering questions of the type "Which X should be used to get answer Y?" (a so-called inverse problem). Part of my Master's project was to address the problems where Long-Term Capital Management, with Scholes and Merton, failed, and to motivate solutions to them. In my thesis I had the volatility smile as X and a very few known market prices of a European call option as Y. What this means is that I found the market controls X (in this case volatility in time and space) from how the market prices an option Y. My idea was to use the normal distribution as noise but with increasing volatility to match fat tails, i.e. yielding volatility plots that at any time instant resemble the shape of a smile. My thesis was a very difficult, due to divergence issues, but finally successful attempt to preserve the pleasant properties of the normal distribution and still, within a limited space domain, have a resemblance of fat tails.
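The simplest version of such an inverse problem is recovering a single implied volatility from one observed call price. The sketch below (a toy illustration of my own, not the time- and space-dependent smile calibration from the thesis) inverts the Black-Scholes formula for sigma with bisection; all numbers are made up:

    import math
    from statistics import NormalDist

    def bs_call(s0, k, r, t, sigma):
        """Black-Scholes price of a European call option."""
        d1 = (math.log(s0 / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
        d2 = d1 - sigma * math.sqrt(t)
        n = NormalDist().cdf
        return s0 * n(d1) - k * math.exp(-r * t) * n(d2)

    def implied_vol(price, s0, k, r, t, lo=1e-6, hi=5.0, tol=1e-8):
        """Find the sigma (the X) that reproduces the observed price (the Y)."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if bs_call(s0, k, r, t, mid) > price:
                hi = mid                 # the call price increases with sigma
            else:
                lo = mid
        return 0.5 * (lo + hi)

    market_price = 10.45                 # one observed market price Y
    print(implied_vol(market_price, s0=100.0, k=100.0, r=0.05, t=1.0))  # about 0.20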

Machine Learning is when computers can learn by themselves without help from humans; well, there may be some guidance from humans. Is technology really moving forward at full pace? Evidently, first there are decades of silence around a known technology, then the technology gets hyped and basic methods are sold at a high price. When technology gets hyped it may appear new, which is not always the case. My impression is that it is the big corporations that move decades-old technology forward to public access. The big corporations may not have the best technology, but they do know how to dominate the market; besides, it is mostly they who have the funds and the willingness to move technology forward.

Where does Machine Learning stem from? Very few know that AI and Machine Learning have their roots in military research on subversion, brainwashing and mind control. The Canadian psychologist Donald Hebb, a father of neuropsychology and of neural networks, presented Hebbian learning already in 1949 (long before modern computers!) and, according to the authors of the book Sensory Deprivation: A Symposium Held at Harvard Medical School (1961), wrote:

"The work that we have done at McGill University began, actually, with the problem of brainwashing. We were not permitted to say so in the first publishing.... The chief impetus, of course, was the dismay at the kind of "confessions" being produced at the Russian Communist trials. "Brainwashing" was a term that came a little later, applied to Chinese procedures. We did not know what the Russian procedures were, but it seemed that they were producing some peculiar changes of attitude. How? One possible factor was perceptual isolation and we concentrated on that."

Big Data uses techniques from Distributed Computing as its backbone, and one early Big Data application, from the '60s, is the US surveillance program ECHELON. With Big Data we refer to the study of data sets too large and complex for traditional software to deal with. One could see Big Data technology as a subset of the technology in Distributed Computing, since Distributed Computing requires shared data sets and distributed data flows. Another example is Facebook, which has billions of users, and yet sending a message within Facebook is instantaneous; they have solved problems dealing with fast transmission of data and massively distributed data storage. By combining a worldwide Big Data lake, Machine Learning in a Distributed Computing network and crypto-currency mining technology, and finally adding some computer self-awareness, we get a step closer to our own SkyNet. Maybe the most reasonable missing link to SkyNet would be to chip humans into becoming cyborgs?

Blockchain and Digital Currencies

Blockchains can be used in many applications, especially those where history needs to be immutable and preserved. For example, in the blockchain of Bitcoin every transaction is preserved, going back to the very first transaction from Satoshi Nakamoto to Hal Finney. By the way, Satoshi Nakamoto might be an acronym of SAmsung, TOSHIba, NAKAmichi and MOTOrola. To me, one of the great interests with blockchain is storing digital contracts, such as house ownership, and storing other legal agreements. By use of a blockchain, two parties may enter an agreement which may be public or kept private for the parties holding the keys. The blockchain is also very useful for keeping accounting records. Most people think Bitcoin is anonymous, but it is not; there are several cases where law enforcement managed to track down Bitcoin users. The purpose of Bitcoin is not anonymity, as people believe; rather, the main purpose is that nobody can be prevented from receiving a transaction. As an example, all WikiLeaks bank accounts were frozen, and other payment processors such as PayPal followed up by freezing WikiLeaks. This kind of behavior, freezing someone's account, is impossible in the Bitcoin network, because Bitcoin has no central authority and no single point of failure to exploit.
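To show why that history is so hard to tamper with, here is a toy hash-chain sketch (my own illustration, far simpler than real Bitcoin): each block stores the hash of the previous block, so altering an old record invalidates every later link:

    import hashlib
    import json

    def make_block(data, prev_hash):
        """A toy block: a payload plus the hash of the previous block."""
        block = {"data": data, "prev_hash": prev_hash}
        block["hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()
        return block

    def chain_is_valid(chain):
        """The chain breaks if any block, or its link backwards, was altered."""
        for prev, block in zip(chain, chain[1:]):
            body = {"data": block["data"], "prev_hash": block["prev_hash"]}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if block["prev_hash"] != prev["hash"] or block["hash"] != recomputed:
                return False
        return True

    chain = [make_block("genesis", "0" * 64)]
    for tx in ["Satoshi pays Hal 10", "Hal pays Alice 2"]:
        chain.append(make_block(tx, chain[-1]["hash"]))

    print(chain_is_valid(chain))                # True
    chain[1]["data"] = "Satoshi pays Hal 1000"  # try to rewrite history
    print(chain_is_valid(chain))                # False: the tampering is detected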

Digital currencies have been of interest to me since 2003, but only in 2010 did I buy the domain name eskrona.com. At first I thought of eKrona as a domain name but found it too Swenglish and ugly-sounding, so instead I chose the name Eskrona, derived from the words Escrow and Krona. Later, in 2013, I reserved the domain electroreserve.com in case I some day get the idea to start a project for designing smart contracts. With regulations and standards, and with enough fungibility, digital currencies will in the end replace cash. Looking at Ethereum (released 2015), it has decentralized computing capability. Ethereum uses decentralized Distributed Computing just like other crypto currencies. Performing work for the distributed network is called mining. From the Ethereum website we can read:

Ethereum is a decentralized platform that runs smart contracts: applications that run exactly as programmed without any possibility of downtime, censorship, fraud or third-party interference. These apps run on a custom built blockchain, an enormously powerful shared global infrastructure that can move value around and represent the ownership of property. This enables developers to create markets, store registries of debts or promises, move funds in accordance with instructions given long in the past (like a will or a futures contract) and many other things that have not been invented yet, all without a middleman or counterparty risk.

What do we get if we combine all the mentioned technologies? It should be a SkyNet of Banking. Considering the people in media and the people in power, the world is nothing but a sandbox. With AI we could remove politicians and their corrupt friends and make the world a better place.

...

(Include some topic I did not think of)
(we live in a simulation?)

Who am I?


A soul is trapped in this body.

(put some links here)

My CV and Links

Welcome to my LinkedIn.

(Here I will put a link for an automatic CV-mailer).