Thursday 28 August 2014

The Limitations of Conventional Programming Languages


Under the heading "Trends in Programming", Dr Geoffrey Sharman, chair of the British Computer Society Advanced Programming Specialist Group, sums up the current trends in programming and application development. He writes:
    Overall, programming languages have been relatively stable for several decades. Almost all modern languages are derived originally from Algol and, more directly, from C.
    While there continues to be development of existing languages such as C++ and Java, and of new languages such as Python, Ruby and Groovy, these are recognisable as incremental improvements on an existing paradigm, rather than new paradigms, and therefore exploit widely available programming skills. Notable exceptions are COBOL and FORTRAN, which are firmly established in particular industries, but also stable providing that skills are maintained.
    Similarly, programming tools such as compilers, interpreters and debuggers have improved over many years. The introduction of integrated development environments (IDEs) just over a decade ago provided a significant increase in programming productivity, which continues to be improved year on year.
    No other technologies are in sight that might offer significant productivity increases and, therefore, current attention is focussed on ‘agile’ development methodologies, which seek to offer shortened development cycles and increased confidence in the outcomes of development projects. 
    For the most part, these methods are based on iterative development techniques in which a subset of function can be demonstrated early in a development project and then reviewed against user needs, and enhanced or refined as the project progresses. The success of these techniques is based primarily on refining specifications rather than the development process itself. In other words, answering the question ‘am I developing the right thing?’ rather than ‘am I developing the thing right?’ ... ...

Basically, little has really changed since I retired in 1988. The real problem, which has not been tackled, is that modern computers are black boxes. Users running a black box system of any kind have a serious (and in some cases catastrophic) problem when something goes wrong, because they do not know what has gone wrong or how to correct it. This means that every effort has to be made to ensure that the black box always works correctly - and the more complex the task, the harder it is to pre-define and implement every possibility. The article suggests that what are now needed are not better programming languages but better ways of specifying the task and ensuring that the program does what the task requires.

No one appears to have realised that the fundamental problem results from having a black box. What is really needed is a white box system where the user can work symbiotically with the automated system. Of course things can still go wrong, but now the user can see what is wrong and take appropriate remedial action. As CODIL was a preliminary attempt to build a white box computer, I decided to write the following letter in reply to Dr Sharman's article.
     As a long-retired Fellow of the Society I read about the comparative lack of progress during the years since my retirement, and I am not really very surprised. The conventional rule-based programming approach lacks the flexibility of the human mind and has problems with the messier aspects of the real world. An analogy with the railways of Victorian times illustrates the problem. Both railway lines and programs need to be planned in advance, and only when they have been built can "fare-paying customers" (goods and passengers in the case of trains, data in the case of programs) use the systems. Both are prone to considerable disruption if faults occur in key places, and both are unable to cater for low-volume, non-standard "journeys" (which do not justify the up-front building costs) or for unpredictable real-world events. Many of the bigger and more successful computer systems work because people are flexible and change their behaviour when offered a limited but very much cheaper service - moving to live in houses built near railway stations in late Victorian times, and using hole-in-the-wall banking today.
     However, there are many problems whose requirements are very hard to fully pre-define, and where low-volume and unpredictable requirements cannot be ignored. We still read of projects in such areas running into trouble. Medical records are a good example. They involve the active participation of many people to gather the data, which relates to the real-life problems of many patients who each have an assortment of medical issues. At the same time, medical advances lead to changes in our understanding of diseases, new ways of monitoring patients, new drugs and medical experiments, and problems such as the development of drug resistance.
     I have recently been looking back into the relevant computing history. Many of the early experimental programming languages got squeezed out in the rush to develop better conventional programming tools and one of the “lost” languages seems of particular interest in this context. CODIL (COntext Dependent Information Language) was conceived as the symbolic assembly language of a radically new human-friendly “white box” computer architecture, as an alternative to the human-unfriendly Von Neumann “black box” computer. The research was triggered by a study of the 1967 sales accounting package of one of the biggest commercial computer users, Shell Mex & BP, at a time when many of the sales contracts had been drawn up in pre-computer days. The initial research work into CODIL was financially supported by the LEO pioneers, David Caminer and John Pinkerton, but was axed when the old LEO research labs were closed and ICL was formed. A short talk on the first preliminary research was given to the Advanced Programming Group 45 years ago, and several papers were later published in the Computer Journal describing work with a simulator, as no hardware was ever built.
     A re-examination of the CODIL project papers suggests that the real reason for its failure was that the research concentrated on the possibility of producing a competitive computer package, and failed to do the essential, unrushed blue-sky research into why it worked!
    My current assessment is that the CODIL approach represented an alternative mathematical model of information processing to the “Universal Machine” approach of the conventional stored program computer. Instead of a top down rule based approach which uses numbers to represent instructions, addresses and data, within a precisely defined mathematical framework, CODIL takes a bottom up approach using recursive sets rather than numbers as the basic storage unit and makes no formal distinction between program and data. It uses associative addressing and automatically compares patterns to find and fill up “gaps” in incomplete patterns. It appears that the approach could be implemented on a simple neural network and work done 40 or more years ago may prove to be relevant to understanding how the brain works.
     Of course further examination may show that the CODIL approach is not the answer to building complex human-friendly open-ended systems but its very existence could indicate that there are other interesting research gems which were lost in the mad rat race in the early days of computing to capitalise on the market potential of this new invention. 
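To make the gap-filling idea in the letter more concrete, here is a purely illustrative sketch in Python. It is not real CODIL syntax or semantics, and the attribute names and sample sales-contract facts are my own invented examples: it simply shows the flavour of associative matching, where statements with missing items ("gaps") are compared against stored patterns and the gaps are filled from whatever matches.

```python
# Illustrative sketch only - not actual CODIL. Facts and queries are both
# collections of attribute/value items; a query may leave values as None
# ("gaps"). Matching is associative rather than address-based: a fact
# matches a query if it agrees on every item the query fixes, and the
# fact's values are then used to fill the query's gaps.

facts = [
    {"CUSTOMER": "Smith", "PRODUCT": "Petrol", "CONTRACT": "C-101"},
    {"CUSTOMER": "Jones", "PRODUCT": "Diesel", "CONTRACT": "C-202"},
]

def matches(fact, query):
    """A fact matches if it agrees on every attribute the query fixes."""
    return all(fact.get(attr) == value
               for attr, value in query.items()
               if value is not None)

def fill_gaps(query, facts):
    """Return copies of the query with its None gaps filled from matching facts."""
    results = []
    for fact in facts:
        if matches(fact, query):
            filled = dict(query)
            for attr, value in filled.items():
                if value is None:
                    filled[attr] = fact.get(attr)
            results.append(filled)
    return results

# "Which contract covers Smith?" - CUSTOMER is known, CONTRACT is a gap.
query = {"CUSTOMER": "Smith", "CONTRACT": None}
print(fill_gaps(query, facts))
# [{'CUSTOMER': 'Smith', 'CONTRACT': 'C-101'}]
```

Note that nothing here distinguishes "program" from "data": the query and the stored facts have exactly the same shape, which is one of the properties the letter attributes to the CODIL approach.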
