

4 functional programming advances redefining app dev for the cloud

Dean Hallman Founder and CTO, WireSoft
 

Every 10 years or so, another promising new programming methodology comes along that offers a new and sometimes better way to model systems, structure code, and meet the asynchronous and throughput demands our clients and stakeholders expect.

Today, a new crop of programming techniques and methodologies is maturing that promises to once again advance the state of the art in programming practice and methodology. 

But these developments are not occurring uniformly across every technology stack, and it can be easy to miss trends happening in someone else's tool chain.  

Here are four recent advances in functional programming (FP) that have the potential to change how software engineers solve programming problems over the next five to 10 years, and the underlying drivers motivating these advances.

    Advances and the motivations behind them

    Before I get to the advances and why they'll be key, here's some essential context on what's driving them forward.

    Usually, advances in programming methods don't arise in a vacuum; they come in response to new requirements. For example, object-oriented programming began with Simula and Smalltalk in the late 1960s and early 1970s. These languages and their accompanying abstractions helped their inventors emulate real-world constructs in code.

    So, what is driving the need for advances in programming methods today? It's always easier to identify and describe trends after they've played out, but I think there are some drivers that are apparent even now.

    Increased parallelism

    As Moore's Law started to lose steam in the late 2000s, processors stopped adding speed and started adding cores. To avoid leaving cores idle, our programming methods, languages, and runtimes have required a refactoring to incorporate concurrency at a fundamental level.

    Increased asynchronicity

    Big data, cloud computing, and stream-processing systems now rely on multi-stage extract, transform, and load (ETL) pipelines. These pipelines wrangle data from multiple sources asynchronously, across a range of disparate, distributed systems.

    Increased inversion of control

    Inversion of control (IoC) is an important software design principle that has been increasing in scope and popularity in recent years. (See this article for a more in-depth explanation of IoC.) From GUIs to serverless computing to the Istio service mesh, the industry continues to find new ways to expand and apply the principles of IoC. But pervasive IoC comes at a cost.

    IoC frameworks and containers exist because they make it easier to build complex, distributed systems. But IoC also forces most of your code into callbacks, which simplifies some concerns, but complicates others. For example, while we are able to program as if we are the only thread in the process and gain services and security transparently, we are simultaneously robbed of the context of an execution pointer.

    In other words, your program doesn't resume where it left off; it resumes in a callback. This can be problematic for FP, where functional composition requires resuming where you left off within a chain of functions. Your code must build up and retain context between and across callbacks, while remaining side-effect-free as FP demands.
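
    Here's a minimal sketch of that constraint, using stand-in asynchronous helpers (authenticate and loadProfile are hypothetical): each callback starts with a fresh call stack, so the accumulated context has to be passed forward explicitly rather than mutated in shared state.

        // Stand-in async helpers (hypothetical; any callback-based API behaves this way).
        const authenticate = (request, cb) =>
          setTimeout(() => cb(null, { id: 1, name: request.user }), 10);
        const loadProfile = (userId, cb) =>
          setTimeout(() => cb(null, { userId, plan: 'pro' }), 10);

        function handleRequest(request, done) {
          authenticate(request, (err, user) => {
            if (err) return done(err);
            const ctx = { request, user };          // context is rebuilt here, not resumed
            loadProfile(user.id, (err, profile) => {
              if (err) return done(err);
              done(null, { ...ctx, profile });      // extend a copy; no shared mutation
            });
          });
        }

        handleRequest({ user: 'dean' }, (err, result) => console.log(err || result));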

    IoC also brings security implications that are particularly relevant in distributed, cloud-based systems. As Kyle Simpson points out in his blog, when you yield control of some asynchronous activity to another party, you incur a trust relationship with that party. A malicious or unreliable third party could easily betray that trust by calling your callback too soon, too many times, with bad data, or with false errors.

    Today's big programming problems

    As a consequence of the three drivers discussed above, asynchronous concerns are harder to reason about and code, interdependencies and flow control are made more complicated, vendor lock-in has become harder to avoid, and security has become harder to guarantee.

    Finally, dealing with parallelism in software is no longer optional, so it must get easier to account for in code.

    Here's the problem put another way: In today's cloud computing environment, where more context and control of the flow of your application has been wrestled away from "your code," how can you write code that adheres to FP principles, ensures that asynchronous responses are processed securely and in order, leverages concurrency across CPU cores, and codifies interdependencies and flow control in a manner that is simple, readable, and not overly tied to specific vendors?

    The four key advances

    What follows are four FP advances that help to address these challenges. They will better position you to build the next generation of cloud-, microservices-, and stream-based applications.

    The goal is not to teach all these techniques here, but to put these advances on your radar (if they aren't already), and provide you with good citations for following up. These advances aren't necessarily brand new, but they are gaining momentum. Communicating sequential processes, for example, is based on a groundbreaking research paper first published in 1978 by Tony Hoare.

    The list below focuses primarily on JavaScript, since I'm more familiar with that ecosystem. But in many cases, Haskell, Clojure, and other FP-first languages have equivalents.

    Finally, given how many FP frameworks and languages are under active development, I'm sure I'll miss some important advances that should be on the list. If so, I’d love to hear about them in the comments section below.

    1. The 'promise' architecture and derivatives

    The promise, a.k.a. the future, is a groundbreaking abstraction, and it's the minimum bar every functional programmer should understand. A promise is a guarantee of a future value, or at least an explanation if the value could not be obtained.

    Here's a real-world analog. A promise is like getting a ticket number at the DMV. That ticket guarantees you'll either get the driver's license you came for, or perhaps just an explanation that the DMV's photo printer is broken, so no license for you.
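
    To make the analogy concrete, here is a minimal sketch of the promise contract (the printer check and the delay are stand-ins for real asynchronous work):

        // The caller holds a "ticket" (the promise) that later settles with
        // either the value or the reason it could not be produced.
        const licenseRequest = new Promise((resolve, reject) => {
          const printerWorking = Math.random() > 0.5;   // stand-in for a real async outcome
          setTimeout(() => {
            if (printerWorking) resolve('Driver license #12345');
            else reject(new Error('The photo printer is broken'));
          }, 100);
        });

        licenseRequest
          .then(license => console.log('Received:', license))
          .catch(reason => console.log('No license:', reason.message));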

    Another important aspect of promises is that they at least partially address the security concerns that IoC via callbacks introduces. Unlike a raw callback, a promise's resolve and reject functions guarantee that the promise can settle only once, no matter how many times they are called.
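
    A short demonstration of that guarantee: a promise settles at most once, so extra calls to resolve or reject are simply ignored.

        const settledOnce = new Promise((resolve, reject) => {
          resolve('first value');
          resolve('second value');        // ignored: the promise is already settled
          reject(new Error('too late'));  // also ignored
        });

        settledOnce.then(value => console.log(value)); // logs "first value" exactly once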

    Promises are not only useful in their own right; they have also paved the way for several derivative works that are even more compelling than the original. Many languages support promises through third-party frameworks, but JavaScript and Node are particularly good places to learn about promises and explore alternative implementations. In particular, the Folktale Task and Fluture libraries are good early indicators of how the promise architecture is evolving.

    Promises have also been incorporated into, or served as the foundation for, other asynchronous constructs that have emerged in recent years. The two primary examples are promises combined with generators and the async/await feature introduced in the ES2017 version of JavaScript.
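
    As a brief sketch of async/await layered on promises (fetchUser below is a stand-in for a real API call), await suspends the async function until the promise settles, so asynchronous steps read as straight-line code:

        const fetchUser = id =>
          new Promise(resolve => setTimeout(() => resolve({ id, name: 'Ada' }), 50));

        async function greet(id) {
          const user = await fetchUser(id);  // execution resumes here when the promise resolves
          return `Hello, ${user.name}`;
        }

        greet(1)
          .then(console.log)                 // "Hello, Ada"
          .catch(err => console.error(err));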

    The Promise Waterfall is another useful construct built on promises. It's an adaptation of the Async library's async.waterfall that uses promises instead of callbacks. A Promise Waterfall is essentially an FP compose function for promises, with specialized error handling. There are many implementations of the Promise Waterfall, including one of my own construction in a library called Pathfinder, which combines the advances discussed here into a single, unified programming model.
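
    To illustrate the idea (this is a generic sketch, not Pathfinder's or async.waterfall's actual API), a promise waterfall composes asynchronous steps so that each receives the previous step's result, with a single catch handling a failure from any step:

        // Compose promise-returning steps left to right.
        const promiseWaterfall = (...steps) => initial =>
          steps.reduce((acc, step) => acc.then(step), Promise.resolve(initial));

        const pipeline = promiseWaterfall(
          n => Promise.resolve(n + 1),
          n => Promise.resolve(n * 2),
          n => Promise.resolve(`result: ${n}`)
        );

        pipeline(1)
          .then(console.log)                                      // "result: 4"
          .catch(err => console.error('pipeline failed:', err));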

    2. Monadic functional programming

    Monads and their ilk are a hot topic in functional programming, and for good reason. But with the exception of Dr. Boolean (Brian Lonsdorf), few people are able to explain them painlessly, in simple terms. Lonsdorf's 2016 talk, Oh Composable World!, elucidates the real purpose of monads by way of a simple box.

    In short, the point of monads is: Don't operate directly on a value. Instead, put that value in a dot-chainable box and let the box manipulate its value for you through a standardized and predictable API. Why? Because this indirection buys you greater readability, flexibility, and composability.
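
    Here is a minimal box in the spirit of Lonsdorf's talk (a hand-rolled Box, not a library type): the value never leaves the box; you map functions over it and only extract the result at the edge of your program.

        const Box = value => ({
          map: fn => Box(fn(value)),   // apply fn inside the box, return a new box
          fold: fn => fn(value),       // take the value out at the very end
          toString: () => `Box(${value})`
        });

        const result = Box(' 64 ')
          .map(s => s.trim())
          .map(Number)
          .map(n => n + 1)
          .fold(n => String.fromCharCode(n));

        console.log(result); // "A"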

    Where monads start to ramp up the learning curve is in the number and diversity of monads, functors, applicatives, lenses, and so on. Getting your head around the full vocabulary takes time. To help with that, the JavaScript community offers FantasyLand, a specification of the common monadic structures and standards for interoperability.

    Several libraries implement part or all of the FantasyLand specification, including Monet.js and Sanctuary. The "maybe" and "either" monads in these libraries are a good starting point (after Lonsdorf's talk) for understanding monads and how their use can streamline your code and improve readability.

    Building on the either monad, Scott Wlaschin's 2014 talk on "Railway-Oriented Programming" illustrates how to keep error handling from polluting your "happy path" with unwanted complexity.
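
    Here is a hand-rolled sketch of that idea (not Monet's or Sanctuary's actual API): successes stay on the Right track and flow through map and chain, while the first Left switches onto the failure track and short-circuits everything that follows.

        const Right = value => ({
          map: fn => Right(fn(value)),
          chain: fn => fn(value),
          fold: (onErr, onOk) => onOk(value)
        });
        const Left = error => ({
          map: _ => Left(error),              // skipped on the failure track
          chain: _ => Left(error),
          fold: (onErr, onOk) => onErr(error)
        });

        const parseJson = text => {
          try { return Right(JSON.parse(text)); }
          catch (e) { return Left(`invalid JSON: ${e.message}`); }
        };
        const getPort = config =>
          config.port ? Right(config.port) : Left('no port configured');

        const report = text =>
          parseJson(text)
            .chain(getPort)
            .map(port => `listening on ${port}`)
            .fold(err => `error: ${err}`, msg => msg);

        console.log(report('{"port": 8080}')); // "listening on 8080"
        console.log(report('not json'));       // "error: invalid JSON: ..."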

    The promise architecture is still evolving, and monads are part of that evolution. The issue is that promises, as currently defined in the JavaScript language, are not monadic, and advocates of monadic FP are somewhat critical of promises for that reason. But the promise equivalents in Folktale and Fluture offer good alternatives that are monad- and FantasyLand-compliant, correct promises' error-handling idiosyncrasies, and add other enhancements.

    3. Functional reactive programming and the observable

    Imagine for a moment that you could physically wire up connections between the controls on a web page, so that the search box is connected to the table of search results, to a counter label showing the number of hits, and to a cancel button. With all of these controls wired up, as soon as you type a character into the search box, everything connected to it updates immediately, in real time.

    This is how observables work: They are like software circuitry, connecting and flowing inputs to outputs the instant inputs change, and transforming the input values along the way. Each intermediate transformation of input values can return synchronously or asynchronously, and if the user cancels a long-running async operation, the software circuitry can be interrupted and reset.

    Functional reactive programming (FRP) and its observable abstraction offer a new and powerful way to solve asynchronous problems: inputs are treated as a set of incoming streams that can be merged, mapped, split, and so on.

    In FRP, an observable is analogous to an array, but where an array is a collection of existing values, an observable is a collection of future values. And where arrays are iterated to enumerate existing values, observables are subscribed to, and subscribers listen for the arrival of future values.

    RxJS is currently the leading observable implementation, but Bacon.js is a very capable alternative; it seeks to avoid RxJS's distinction between hot and cold observables. Lastly, Pathfinder offers a new take on FRP that builds on promises instead of replacing them and avoids the complexities of flatMap.
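
    Here is a small sketch of the search-box idea, assuming an RxJS v7-style setup (the Subject stands in for a stream of input events): keystrokes arrive as a stream of future values, are debounced and transformed, and a subscriber reacts to each result as it arrives.

        import { Subject } from 'rxjs';
        import { debounceTime, map, distinctUntilChanged } from 'rxjs/operators';

        const keystrokes$ = new Subject();   // stands in for fromEvent(searchBox, 'input')

        keystrokes$
          .pipe(
            debounceTime(200),               // wait for typing to pause
            map(text => text.trim()),
            distinctUntilChanged()           // skip repeated queries
          )
          .subscribe(query => console.log('search for:', query));

        keystrokes$.next('func');
        keystrokes$.next('functional');      // only this value survives the debounce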

    For a clear introduction to FRP, observables, and RxJS, read André Staltz's "The Introduction to Reactive Programming you've been missing."

    4. The rise of concurrency models and FP abstractions

    Ryan Dahl, the creator of Node.js, said in the talk where he originally unveiled the platform that "threaded concurrency is a leaky abstraction." In other words, threads do a poor job of abstracting their complexities away from the developer's view.

    Part of the problem with traditional threading models is that they focus the developer's attention on parallelism, which is complex, instead of on concurrency, which is more straightforward. A renewed interest in an old idea, concurrency models, has come about in recent years to address this complexity "leak," as well to keep busy all cores of the newly pervasive multi-core CPUs.

    What's interesting about concurrency models is that they represent a technical evolution beyond threaded concurrency—a kind of post-threads developer experience. The enabling idea behind concurrency models is to wall off activities that are non-overlapping in state and therefore inherently safe to parallelize, and then flow messages between them for coordination. This approach reduces and localizes the points in the system where guards against threading conflicts are necessary.

    FP is complementary to these goals, since it is predicated on eliminating mutable state. That keeps most of your code base thread-safe, so your concurrency model has fewer points of shared, mutable state to contend with.
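
    As a minimal illustration of the "wall off state, flow messages" idea in Node.js (a sketch using the built-in worker_threads module, not a full concurrency framework), the worker below owns its own copy of the data, and the only coordination is a message back to the main thread, so no locks or shared mutable state are needed.

        const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

        if (isMainThread) {
          // Hand a chunk of work to an isolated worker and wait for its message.
          const worker = new Worker(__filename, { workerData: { numbers: [1, 2, 3, 4] } });
          worker.on('message', sum => console.log('sum from worker:', sum)); // 10
          worker.on('error', err => console.error(err));
        } else {
          // The worker computes purely over its own copy of the data.
          const { numbers } = workerData;
          const sum = numbers.reduce((a, b) => a + b, 0);
          parentPort.postMessage(sum);
        }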

    The four main concurrency models are the Reactor pattern, communicating sequential processes (CSP), the actor model, and the LMAX Disruptor.

    There are other concurrency models, but Reactor, CSP, and the actor model are the three in widespread use today; the LMAX Disruptor is less well known but is showing promise.

    Concurrency models are distinct from the software abstractions used to program them. CSP, for example, requires the specification of co-routines, whereas Reactor requires the specification of asynchronous callbacks, typically packaged as promises.

    The challenge in both cases is to meet the concurrency model's structural requirements while preserving the attributes of a functional programming style. For example, even though Go has closures and first-class functions, its exposure of CSP abstractions doesn't necessitate an FP approach. For that, look to Clojure's core.async (itself inspired by Go's channels) and Haskell's CHP library, both of which demonstrate a fully FP approach to CSP.
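
    For a flavor of what CSP-style coordination looks like in JavaScript, here is a toy, hand-rolled channel (illustrative only; not Go, core.async, or CHP): the producer and consumer are independent "processes" that coordinate purely by passing values through the channel.

        // A toy channel: put() delivers to a waiting taker or buffers the value;
        // take() returns a promise for the next value.
        const channel = () => {
          const values = [];
          const takers = [];
          return {
            put: value => {
              if (takers.length) takers.shift()(value);
              else values.push(value);
            },
            take: () =>
              values.length
                ? Promise.resolve(values.shift())
                : new Promise(resolve => takers.push(resolve))
          };
        };

        const ch = channel();

        const producer = async () => {
          for (const n of [1, 2, 3]) ch.put(n);
          ch.put(null);                              // sentinel: no more values
        };

        const consumer = async () => {
          for (let v = await ch.take(); v !== null; v = await ch.take()) {
            console.log('received:', v);
          }
        };

        producer();
        consumer();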

    FP challenges abound

    With the rise of multi-core CPUs, technology has finally caught up to the vision of computer scientists Carl Hewitt and Tony Hoare. Their papers from 1973 and 1978, respectively, on the actor model and on communicating sequential processes laid the groundwork for what we now know as Erlang and Go, or are at least integral to those languages.

    But these developments of actor and CSP concurrency models aren't isolated innovations; they are part of a larger trend. Higher core count CPUs, cloud-based microservices, big data, streaming ETL, and the increasing level and pervasiveness of IoC are posing new challenges for FP abstractions and practitioners.

    Developments in functional and reactive programming, such as promises, monads, observables, and concurrency models, are gaining traction and evolving to meet these challenges.

    Collectively, these four advances are proving that FP and its foundations—closures, currying, functional composition, and functions as parameters—are our best tools and defenses against rising concurrency, asynchronicity, and IoC requirements. Tomorrow's competitive, cloud-based applications will demand no less.
