The End of Computer Science

Alex Songe

End of History

The end of history is a political and philosophical concept that supposes that a particular political, economic, or social system may develop that would constitute the end-point of humanity's sociocultural evolution and the final form of human government.

 

Authors who wrote about the end of history: Thomas More in Utopia, Georg Wilhelm Friedrich Hegel, Karl Marx, Vladimir Solovyov, Alexandre Kojève, and Francis Fukuyama

Every proposed end of history has been a failure so far

But the effort is useful for clarifying what you believe about humanity

End of Comp. Sci.

Computers and programs are formal systems that augment human reasoning

This implies that programs are about meaning just as much as formal systems

(and that they aren't just mathematically/aesthetically interesting and expensive space heaters)

What are we doing when we do Comp. Sci?

A clue from double-entry accounting with Arabic numerals: we use a formal system, or representation, to create or describe some kind of reality (a state of the world)

 

We create a representation, apply the computation, and look at the resulting representation
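A minimal Python sketch of that loop (mine, not from the talk; the accounts and amounts are invented): the ledger is the representation, the balance computation is the transformation, and the result is read back as a state of the world.

```python
# Minimal sketch (not from the talk): double-entry bookkeeping as
# "representation -> computation -> resulting representation".
from collections import defaultdict

# The original representation: a list of transactions,
# each a set of (account, amount) postings that must sum to zero.
ledger = [
    [("cash", -500), ("inventory", +500)],  # buy stock
    [("cash", +800), ("revenue", -800)],    # sell goods
]

def balances(ledger):
    """Apply the computation: fold postings into account balances."""
    totals = defaultdict(int)
    for transaction in ledger:
        assert sum(amount for _, amount in transaction) == 0, "books must balance"
        for account, amount in transaction:
            totals[account] += amount
    return dict(totals)

# The resulting representation, read back as a state of the world.
print(balances(ledger))  # {'cash': 300, 'inventory': 500, 'revenue': -800}
```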

What are we doing when we do Comp. Sci?

Formal correctness is how well the resulting representation fits the original representation, according to what the transformation was meant to do

This is opposed to the view that programs are only composed of instructions

Even actual instructions are only symbolic: modern CPUs rewrite machine instructions with microcode... it's turtles all the way down.

What are we doing when we do Comp. Sci?

Excel is almost certainly the most popular and successful programming paradigm

Authors create spreadsheets using an accessible, visual representation of the world; the spreadsheet transforms itself into another representation that corresponds back to the world in the same way the input was given

Other programming environments and languages provide different ways of describing the world, but are much more laborious
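A rough Python sketch of why the spreadsheet paradigm is accessible (my own illustration; the cell names and numbers are invented): the author only describes relationships between cells, and the system derives the resulting representation on demand.

```python
# Minimal sketch (not from the talk) of the spreadsheet idea:
# the author describes the world as named cells and formulas,
# and the system produces the derived representation.
cells = {
    "price":    lambda get: 19.99,
    "quantity": lambda get: 3,
    "subtotal": lambda get: get("price") * get("quantity"),
    "tax":      lambda get: get("subtotal") * 0.08,
    "total":    lambda get: get("subtotal") + get("tax"),
}

def get(name):
    """Recalculate a cell on demand by evaluating its formula."""
    return cells[name](get)

print(round(get("total"), 2))  # 64.77
```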

Meaning is inescapable

Meaning hides itself implicitly even inside many formalisms that some consider free from meaning

Even CPU instructions convey a symbolic meaning, as modern CPUs will rewrite instructions and maintain correctness

Meaning is inescapable

Once that door is open, a whole host of philosophical concepts emerge:

Semantics: what stuff means

Ontology: what stuff exists

Idioms: how we talk about stuff

Values: what stuff is important

Which quickly also include:

Politics: other people and stuff

Ethics: whether to (not) do stuff

Now What?

Meaning transforms some of what we think about when we program, and it reveals new questions and strategies:

- Whose fault are bugs? If a formal language is unintuitive, is it a bad language?

- Can we get rid of bugs by separating the description of the world from the implementation of the algorithm? (see the sketch after this list)

- What features of a language are important to solve this problem?

Learn from Richard Feynman: be sensitive to when you are actually confused vs. when you have simply made a mistake
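One possible sketch of that separation in Python (my own, with an invented validation spec): the description of the world is plain data, and a small generic engine applies it, so bugs in the description and bugs in the algorithm can be found independently.

```python
# Minimal sketch (not from the talk): separate the description of the
# world (a declarative spec) from the algorithm that applies it.
SPEC = [
    ("age",   lambda v: isinstance(v, int) and v >= 0,  "age must be a non-negative int"),
    ("email", lambda v: isinstance(v, str) and "@" in v, "email must contain '@'"),
]

def validate(record, spec=SPEC):
    """Generic engine: apply every rule in the description to the record."""
    return [message for field, rule, message in spec
            if not rule(record.get(field))]

print(validate({"age": 42, "email": "alex@example.com"}))  # []
print(validate({"age": -1, "email": "nope"}))              # both error messages
```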

Now What?

- When it comes to picking software and PLs, values play a huge role

- There are always tradeoffs to make, though not all of these are necessary tradeoffs:

See Bryan Cantrill, "Platform as a Reflection of Values": https://vimeo.com/230142234

  • Approachability vs Expressiveness
  • Performance vs Ease of reasoning
  • Transparency/Debuggability vs Security
  • Safety vs Performance
  • Stability vs Velocity
  • Extensibility vs Simplicity

Now What?

Expand your mind with PLs that are different from what you already know (Java/C++/C#/Python/Ruby are almost identical)

- Declarative/Query: SQL, Datalog, GraphQL

- Category Theory (better types): OCaml, Haskell, Idris

- Lazy Evaluation: Haskell

- Logic/Relational: Prolog, miniKanren, etc.

- Distributed: Lasp, Dedalus

 

"7 ((More )?Languages|Databases|Concurrency Models) in 7 Weeks" books are great for changing the way you think

Another level

- What if we consider users part of the software system?

- Users have independent models of reality in their heads

- They interact with the system, which often has several *levels* of models of reality: UI state, the connection back to the server, the server's model(s), and often a database model of reality as well (sketched after this list)

- Including the user in the model is currently transforming computer security, including how we account for user behavior (e.g., studies showing that mandatory password-change policies make systems less secure)

- And yet another level to this understanding: programmers are users of programming languages
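A tiny Python sketch of those levels (mine; the field names are invented): the same fact lives in several models that must stay in correspondence, plus the model in the user's head that no assert can check.

```python
# Minimal sketch (not from the talk): one fact, three layered models of reality.
ui_state  = {"cart_items": 3}            # what the user sees
api_model = {"cart": {"item_count": 3}}  # the server's model
db_row    = ("cart", 3)                  # the database's model

# "Correctness" here is agreement across levels -- and agreement with
# the model in the user's head, which no assert can check.
assert ui_state["cart_items"] == api_model["cart"]["item_count"] == db_row[1]
```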

What about AI?

- Machine learning is probably the biggest challenge to the meaning-centric Comp. Sci. hypothesis

- Unclear whether this is a weakness of AI or of the meaning hypothesis

- A big problem with AI algorithms is that fitness functions are not a substitute for meaning and its role in understanding or predicting the behavior of a software system

Instead of AI

- Consider again that the developer is a user

- The user still has to present the original reality to the machine

- Innovations in PLs and technology are about removing cognitive load from the PL user (declarative programming, static typing, type inference, promises/futures vs multithreading, immutability, Rust's memory ownership, etc.); see the sketch after this list

- The more a language makes something your responsibility, the more likely you are to make a mistake

- The tradeoff is that more automation shrinks what counts as a valid program in that system, making some programs impossible to write and thereby some problems impossible to solve
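A small Python sketch of one such trade (my own; the URLs are placeholders): a futures-based API moves thread bookkeeping from the programmer to the runtime, so the programmer only states what to compute.

```python
# Minimal sketch (not from the talk): futures shift thread bookkeeping
# from the programmer to the runtime.
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Stand-in for real I/O; a real program would hit the network here.
    return f"response from {url}"

urls = ["https://example.com/a", "https://example.com/b"]

# The executor owns thread creation, scheduling, and joining;
# the programmer only states what to compute.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(fetch, urls))

print(results)
```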

Resources

- 7 in 7 books: https://pragprog.com/categories/7in7

- >code podcast https://www.greaterthancode.com/

 

Bryan Cantrill Talks

 

- Software Values https://vimeo.com/230142234

- OSS Governance https://youtu.be/-zRN7XLCRhc

- Tech Leadership https://youtu.be/9QMGAtxUlAc
