Best products from r/haskell

We found 60 comments on r/haskell discussing the most recommended products. We ran sentiment analysis on each of these comments to determine how redditors feel about different products. We found 66 products and ranked them based on the number of positive reactions they received. Here are the top 20.

Top comments mentioning products on r/haskell:

u/pron98 · 1 pointr/haskell

>Why would biological organisational structures necessarily be best?

I don't think they necessarily are, and perhaps I'm reading Kay too charitably, but biological systems have found solutions to problems we are still struggling to solve with computers (resilience, maintenance). I don't agree with Kay (if that's what he means) that if a piece of software is different from a biological system then the software must be doing something wrong, but I do agree that it is worthwhile to consider how life solves problems that technology hasn't yet. But even then, I totally agree with you that it is very likely that life's solutions may not be applicable to technology, because many of the solutions life takes are based on brute force or extreme redundancy, things that are, at least currently, materially prohibitive for technology. Also, I think that one of the things those who encourage looking to biology for inspiration often miss is that life's goals are different from a computer's. As Hamilton and Price taught us, as a computational system life is a single machine whose "goal" (or what the algorithm optimizes for) is not the survival of the individual but of the gene (the so-called "selfish" gene). When we write software, it is very much the survival of the individual which is the goal. Although, maybe not? Maybe the goal is the survival of the cluster, which is made of individuals who share the same "genes", and so "selfish altruism" is a good inspiration?

As to the question of evolution and local minima, I think that's a problem in theory but not in practice. Some of life's solutions are well beyond what our technology is currently capable of achieving no matter what we try, and it will be some time until we can think of doing things better. Evolution is not optimal, but it seems to have done many things better than we have so far managed to, although, I guess, partly because it has access to nano-manufacturing and nano-machines while we don't yet. When we do, maybe we'll be able to surpass evolution's design.

BTW, completely tangential but while we're on the subject I very much encourage you to read Stuart Kauffman's At Home in the Universe, which claims that natural selection is not the only force at play, and there are interesting computational designs that arise naturally under some conditions which are very favorable in the universe. He sees those designs as a sort of a fourth law of thermodynamics. He's the one who, in 1969, proposed the study of a fascinating computational model called boolean networks. Incidentally, synchronicity turns out to have a profound impact on the behavior of boolean networks, but that seems like a minor technical issue, and it seems like we can assume synchronicity as a mathematical abstraction even when the physical implementation isn't exactly synchronous.

> ML modules are just elaborate static machinery on existential types, which are abstract types.

I did not know that, nor do I know what existential types are...

> OOP has no monopoly on modular programming.

I didn't say it does, but I think it is very ungenerous to deny what I think is an indisputable fact: OOP has made programming significantly better, as evidenced by the fact that we've been able to write much more elaborate, more maintainable programs partly thanks to OOP. It is certainly possible that other approaches are even better -- perhaps much better -- but you can't take away OOP's actual achievements. Also, precisely because OOP (or some variations of it) has been used in practice more than approaches that claim superiority, we are simply more aware of where it falls short. We don't know about other approaches' shortcomings as much simply because they haven't been put to the test ("the test" being widespread industry use). It is possible that they have all of the advantages and none of the disadvantages of OOP, but it is also possible that they have other disadvantages.

As to "half-assing", well, I guess that any popular product is a "half-assed" realization of some pure concept, because there are big-picture concerns that often necessitate breaking the dream a bit. For example, James Gosling described why he designed Java the way he did by saying that the way he saw it, what people really wanted and needed -- i.e., the things he believed would give the greatest bang-for-the-buck -- were garbage collection and pervasive dynamic linking, but those were things that until that time had only been found in languages that, he says, scared people away. And so he decided to put those most important things in the VM and wrap it in a language that seemed familiar and unthreatening. He intentionally compromised on the language -- which is the UI to the VM, and what people see, and the UI is crucial to adoption -- in order to sell the things he thought were the most important. He called it a wolf in sheep's clothing. And this is why I think Java's design is nothing short of brilliant, but it compromised in order to be successful. Whether a good product that nobody uses is really good or not is a philosophical question, but I think that it's more than fair to see lack of adoption as at least some sort of design failure. A different design may not have ended up being so popular. Now it is true that the people behind Java were Lispers (Gosling and Steele) and Smalltalkers (Bracha), and had they been MLers perhaps the result would have been different or "better". But overall, I don't think Java and ML are so incredibly different (apart from immutability by default, which is huge, but I don't think you could have sold that to the masses in 1995; maybe not even now). In some important ways, I think ML is closer to Java than to Haskell.

> I disagree about the value of such things.

I commonly hear this, and all I can say is: read the requirements of an air-traffic control system and see how many of them you could discard to simplify the system. I used to think exactly the same as you until I started working on such systems. I think that a very significant portion of the total industry effort goes into software that solves problems with very high essential complexity. Unfortunately, that part of the industry (which may be the majority) isn't well represented in online forums -- certainly not Haskell forums -- or joint academia-industry conferences.

It is an empirical question (and one which is probably very easy to answer) whether most of the value (measured, say, economically) in software is in small software or in large software (not counting embedded). I am pretty certain that the value is overwhelmingly in large software.

u/binarybana · 2 pointsr/haskell

It sounds like you may want to take the same approach you mentioned in your C++ code, but in a functional way. I'd recommend taking a look at some of Bird's functional pearls such as [1] or look in his book [2] for a great variety, as he often takes the approach of: "okay, let's start with the obvious and (often) intractable approach of generating all possible graphs, and then filter that list." Then he goes one step at a time to generate the solutions more efficiently by taking advantage of the properties of the problem until he often arrives at quite efficient (and elegant) solutions.

Haskell's laziness can be a great benefit here, as Flarelocke mentioned: you only generate graphs as your filter function demands them. So the essence of your program might be: filter graphPass efficientGenerateGraph.
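That generate-then-filter shape can be sketched in a few lines. This is a hedged illustration, not the poster's actual code: the Graph type, generateGraphs, and graphPass are invented stand-ins, with a "graph" reduced to a list of edges between nodes 1..n.

```haskell
import Data.List (subsequences)

-- Invented stand-ins: a graph is just a list of edges on nodes 1..n.
type Edge  = (Int, Int)
type Graph = [Edge]

-- Lazily enumerate every candidate graph on n nodes (all edge subsets).
generateGraphs :: Int -> [Graph]
generateGraphs n = subsequences [ (i, j) | i <- [1 .. n], j <- [i + 1 .. n] ]

-- Example predicate: every node must touch at least one edge.
graphPass :: Int -> Graph -> Bool
graphPass n g = all covered [1 .. n]
  where covered v = any (\(a, b) -> a == v || b == v) g

-- Laziness means candidates are produced only as the filter demands
-- them, so e.g. 'take 1' stops the enumeration at the first hit.
solutions :: Int -> [Graph]
solutions n = filter (graphPass n) (generateGraphs n)
```

Bird's pearls then refine the generator itself so far fewer candidates are ever produced; the filter shape above is only the starting point.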

I also recommend you look at the Inductive graph library [3], as it is a much more mature graph library, as evidenced by Ivan's wonderful supporting libraries such as graphalyze [4] and graphviz. There was a bit of a learning curve for me, but once you get it (just ignore all the monadic interfaces at first, for instance) it is a joy to work with.

To see some 'real' code that uses fgl, you can look at my code, which does some computational biology inference/modeling using it. I've tried to document the code, but I could have done a better job. [5]

[1] - http://www.cs.tufts.edu/~nr/cs257/archive/richard-bird/sudoku.pdf

[2] - http://www.amazon.com/Pearls-Functional-Algorithm-Design-Richard/dp/0521513383

[3] - http://hackage.haskell.org/package/fgl

[4] - http://hackage.haskell.org/package/Graphalyze-0.11.0.0

[5] - https://bitbucket.org/binarybana/grn-pathways

u/wibbly-wobbly · 13 pointsr/haskell

I'm a theorist, so my book recommendations probably reflect that. That said, it sounds like you want to get a bit more into the theory.

As much as I love Awodey, I think that Abstract and Concrete Categories: The Joy of Cats is just as good, and is only $21, $12 used.

Another vote for Pierce, especially Software Foundations. It's probably the best book currently around to teach dependent types, certainly the best book for Coq that has any popularity. You can even download it for free. I recommend getting the source code files and working along with them inline.

I will say that I don't think Basic Category Theory for the Working Computer Scientist is very good.

Real World Haskell is a great book on Haskell programming as a practice.

Glynn Winskel's book The Formal Semantics of Programming Languages is probably the best intro book to programming language theory, and is a staple of graduate introduction to programming languages courses.

If you can get through these, you'll be in shape to start reading papers rather than books. Oleg's papers are always a great way to blow your mind.

u/edwardkmett · 19 pointsr/haskell

Types and Programming Languages by Benjamin Pierce covers type theory: the systems of type inference we can have, the ones we can't, and why.

Pearls of Functional Algorithm Design by Richard Bird covers how to think equationally about code. It is probably the best guide out there on how to "think" like a Haskeller. Not directly about a discipline of math you can apply, but the mindset is invaluable.

Wadler's original papers on monads are probably when they finally clicked for me.

The original Idiom paper is also a golden resource for understanding the motivation behind applicatives.

Jeremy Gibbons' The Essence of the Iterator Pattern motivates Traversable, which so nicely encapsulates what folks meant by mapM over the years.

Uustalu and Vene's The Essence of Dataflow Programming captures a first glimmer of how and why you might want to use a comonad, but it can be fairly hard reading.

Awodey's Category Theory is probably the best general purpose category theory text book.

For folks weak on the math side Lawvere and Schanuel's Conceptual Mathematics can be used to bootstrap up to Awodey and provides a lot of drill for the areas it covers.

Dan Piponi's blog is excellent and largely set the tone for my own explorations into Haskell.

For lenses the material is a bit more sparse. The best theoretical work in this space I can point you to is by Mike Johnson and Bob Rosebrugh. (Pretty much anything in the last few papers linked at Michael's publication page at Macquarie will do to get started.) I have a video out there as well from New York Haskell. SPJ has a much more gentle introduction on Skills Matter's website. You need to sign up there to watch it though.

For comonads you may get some benefit out of my site comonad.com and the stuff I have up on FP Complete, but you'll need to dig back a ways.

u/gregK · 4 pointsr/haskell

>For this reason, if we want to convey the usefulness of FP to the imperatives amongst us, we need to focus on elegant solutions to real world inputs and outputs. We need to show how our I/O-libraries, often overlooked or thought of as mere “helpers”, are superior in efficiency and leads to cleaner and more maintainable code.

There are plenty of articles and blog posts on the web on how to do IO, effects, etc. in Haskell. I am all for learning to do practical stuff like writing to a file. BUT learning to get the most out of pure functions would probably yield the biggest wins in terms of code quality and the ability to reason about it. Having worked for most of my career with imperative languages, I can say that getting the most out of FP requires a whole new skill set. It's not enough to understand the concepts; you need to practice and read a lot of good functional code.

That's why I love books like Pearls of Functional Algorithm Design. We need more books and articles like that. Just the sudoku solver in there is an eye opener.

u/negativezero11 · 3 pointsr/haskell

It's quite pricey, but I recommend
Algebra of Programming by Richard Bird and Oege de Moor. The second chapter has a brief but lucid introduction to category theory insofar as it applies to a functional language like Haskell. It doesn't cover everything, like adjunctions, but it's a great start (definitely easier than Awodey unless you already know a ton of algebra). The bonus here is that you can download solutions to some exercises as well!

You may also enjoy Computational Category Theory, although the examples are in Standard ML.

u/mightybyte · 16 pointsr/haskell

I actually had this exact discussion today. A number of people argue that type classes must have laws. I definitely share the general sentiment that it is better for type classes to have laws. But the extreme view that ALL type classes should have laws is just that...extreme. Type classes like Default are useful because they make life easier. They reduce cognitive load by providing a standardized name to use when you encounter a concept. Good uniformly applied names have a host of benefits (see Domain-Driven Design for more on this topic). They save you the time and effort of thinking up a name to use when you're creating a new instance and also avoid the need to hunt for the name when you want to use an instance. It also lets you build generic operations that can work across multiple data types with less overhead. The example of this that I was discussing today was a similar type class we ended up calling Humanizable. The semantics here are that we frequently need to get a domain-specific representation of things for human consumption. This is different from Default, Show, Pretty, Formattable, etc. The existence of the type class immediately solves a problem that developers on this project will encounter over and over again, so I think it's a perfectly reasonable application of a useful tool that we have at our disposal.

EDIT: People love to demonize Default for being lawless, but I have heard one idea (not originally mine) for a law we might use for Default: def will not change its meaning between releases. This is actually a useful technique for making an API more stable. Instead of exporting field accessors and a data constructor, export a Default instance and lenses. This way you can add a field to your data type without any backwards-incompatible changes.
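A minimal sketch of that API-stability trick — export def and setters, hide the constructor. Everything here is invented for illustration: the Config type, its fields, and the setter functions (standing in for lenses); the Default class is inlined rather than imported from data-default to keep the sketch self-contained.

```haskell
-- Inlined for the sketch; in practice this comes from data-default.
class Default a where
  def :: a

-- Imagine the Config constructor is NOT exported: callers can only
-- start from 'def' and modify it with the setters below.
data Config = Config
  { host    :: String
  , port    :: Int
  , retries :: Int  -- a field added in a later release
  } deriving (Eq, Show)

instance Default Config where
  def = Config { host = "localhost", port = 8080, retries = 3 }

-- Plain setter functions standing in for lenses.
setHost :: String -> Config -> Config
setHost h c = c { host = h }

setPort :: Int -> Config -> Config
setPort p c = c { port = p }

-- Code written before 'retries' existed still compiles unchanged:
exampleConfig :: Config
exampleConfig = setPort 9000 (setHost "example.com" def)
```

Because callers never touch the constructor, adding the retries field (with its meaning of def fixed between releases) breaks nothing downstream.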

u/nbksndf · 6 pointsr/haskell

Category theory is not easy to get into, and you have to learn quite a bit and use it for stuff in order to retain a decent understanding.

The best book for an introduction I have read is:

Algebra (http://www.amazon.com/Algebra-Chelsea-Publishing-Saunders-Lane/dp/0821816462/ref=sr_1_1?ie=UTF8&qid=1453926037&sr=8-1&keywords=algebra+maclane)

For more advanced stuff, and to secure the understanding better I recommend this book:

Topoi - The Categorial Analysis of Logic (http://www.amazon.com/Topoi-Categorial-Analysis-Logic-Mathematics/dp/0486450260/ref=sr_1_1?ie=UTF8&qid=1453926180&sr=8-1&keywords=topoi)

Both of these books build up from the basics, but a basic understanding of set theory, category theory, and logic is recommended for the second book.

For type theory and lambda calculus I have found the following book to be the best:

Type Theory and Formal Proof - An Introduction (http://www.amazon.com/Type-Theory-Formal-Proof-Introduction/dp/110703650X/ref=sr_1_2?ie=UTF8&qid=1453926270&sr=8-2&keywords=type+theory)

The first half of the book goes over lambda calculus, the fundamentals of type theory and the lambda cube. This is a great introduction because it doesn't go deep into proofs or implementation details.

u/agentultra · 4 pointsr/haskell

If it's a web application try putting a Haskell process in front of it. The Haskell process can run the legacy server in a sub-process, holding a lock if necessary, while it proxies requests straight through to the legacy application. Write all of your tests against the Haskell code and build a good layer of unit and integration tests. As you gain confidence in the test suite slowly replace code paths into the legacy code with a Haskell module that does the same thing. Wash, rinse, repeat.

The benefit to this is that the legacy code gets under test and becomes maintainable. You can start a re-write of the code base while shipping new features. If your team wants to back out of Haskell they haven't lost anything. And if your team enjoys working with Haskell it can really improve morale.

https://www.amazon.ca/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052 has plenty of good strategies for dealing with the situation you're in.

Maybe down the road your team will start to see the benefits of Haskell but I would focus on being pragmatic. I have received better results by showing people how I've used Haskell to solve problems rather than telling them why they should be using Haskell. Even if the people in your audience are skeptical they should at least see that you're getting some value out of it. That can be a compelling enough story to get them interested.

u/tel · 19 pointsr/haskell

Something like

  • Documentation has a wider goal than just "documenting", it must transition a novice user to an expert
  • To do this you must do more than annotate, you must teach
  • Types, tests, readable source, etc all mystify the beginner—while they have a purpose, they do not serve the total goal
  • Function-level documentation is great, but it's just one piece of the whole too
  • Community-driven documentation without an owner sucks. You need a voice and a guiding principle
  • Teaching is about empathy—your documentation should exude empathy for the novice

Then there's a breakdown and guide:

  • Good documentation comes in, perhaps, four parts
    • First Contact assumes little base knowledge and answers "what is this?" and "why do I care?"
    • It also describes "what's next?"
    • The Black Triangle is a step-by-step guide that takes a user who has decided that they do care to the point of operating the library, simply
    • Get your user using as fast as possible
    • The Hairball is a largeish breakdown of all the things someone must know, each paragraph nudging the novice toward greater understanding bird-by-bird
    • The Reference is support documentation for experts

u/winterkoninkje · 1 pointr/haskell

Except that Mac Lane is only good if you are, indeed, a working mathematician.

A good introduction is Pierce, though it doesn't get into gritty details. For some of the grittier details, and for those on a budget, Adámek is freely available and is a good reference. However it definitely requires active reading and working through the examples, not just sitting down with a hot beverage.

Once you have the basics down, Google is your friend. A lot of folks in the Haskell community have blogs talking about various things.

u/CKoenig · 6 pointsr/haskell

the "vanilla" books are IMO quite boring to read - especially when you don't know more than Set/Functions.

but I really enjoy P. Aluffi's Algebra: Chapter 0, which builds up algebra using category theory from the get-go instead of after all the work

----

remark: I don't know if this will really help you understand Haskell (I doubt it a bit), but it's a worthy intellectual endeavor all in itself, and you can put on a knowing smile whenever you hear those horrible words afterwards

u/gfixler · 3 pointsr/haskell

>When working in Java you just need to embrace it.

Haha. Agreed. When you're a hostage, just do what they say, and live to fight another day.

>...showed me how it's supposed to be done.

I've tried to see how it's supposed to be done many times, but it's just a broken abstraction for me. If I want to turn off a light, I flip the switch to off. In OOP, I'm supposed to create a Light class to hold the state of everything related to the light, then accessor methods with access control levels set up just so to protect me from the world, in case anyone wants to make something based on my whole lighting setup. Then I need to create nouns to shepherd my verbs around, like LightSwitchToggleAccessor, and worry about interfaces and implementations and design patterns.

In Haskell I'd say "A light can just be on or off; let's make it an alias for a boolean."

type Light = Bool

I want to be able to turn it on and off; that's just a morphism from Light state to Light state.

toggleLight :: Light -> Light
toggleLight = not

And that's it. If I realize later that I don't want Light and Bool to be interchangeable, I'd just make Light its own type with a simple tweak to give it its own two states:

data Light = Lit | Unlit

And change the toggle to match:

toggleLight :: Light -> Light
toggleLight Lit = Unlit
toggleLight Unlit = Lit

Then I could toggle a big list of lights:

map toggleLight [light1, light2, mainLight, ...]

Or turn them all on:

map (const Lit) [light1, light2, ...]

I have equational reasoning. I can do like-for-like transformations. I get all the goodness of category theoretic abstractions, giving me reusability the likes of which I've never seen in OOP (not even close). Etc.

>objects are closures

Closures are immutable (hence the glory of this). Objects tend to be mutable, which is a nightmare (every day where I work in C#).

>try to keep as much stuff pure as possible

But you just have no way of knowing what's pure and what isn't in any of the OOP environments I've seen, and it is so obvious in C# at work; it plagues us constantly - new bugs daily, and projects always slow tremendously as they grow, and things become unchangeable, because they're too ossified. Just that small thing, that need to specify effects in your types, makes it so much easier to reason about what actually goes on in a function. For example, my Lights up there actually can't do anything in the world. I know that because of their "Light -> Light" types. All they can do is tweak data, the same way every single time they're called - you can replace them with table lookups. They'd have to get some kind of IO markup in their types before they could change anything, which is part of that equational, deterministic reasoning that makes FP so easy to understand, even as projects grow.

I don't want to try to do things. I want it to be fun to do what's good, and impossible to do what's bad. The goal of a great type system is to "make illegal states impossible to represent." I made it impossible to mess with the world, and so I can know with 100% certainty what toggleLights does. I quite literally cannot know what the same function would do in C#. It could return a different result every time. Multiply that up to a few 100klocs, and I have no idea how our projects work, and no idea what I'm breaking when I push commits (and I often break things, and everyone else constantly breaks my stuff, because we can't properly reason about anything).

u/apfelmus · 4 pointsr/haskell

In my opinion, "Algebra of Programming" is really a book about understanding optimization algorithms like dynamic programming, greedy algorithms, divide-and-conquer etc. in a unified manner, guided by category theory.

In other words, it is intended to be applied to problems like [linear paragraph formatting][1] or counting word numbers, though the style is a lot more abstract in the book. This is extremely interesting stuff, but also a little niche.

I can't say anything about Elements of programming, because I have never heard of this book.

If you want a more down-to-earth version of "Algebra of Programming", I would recommend Richard Bird's [Pearls of functional algorithm design][3]. It covers different material, but Bird has a very mathematical/structured approach to programming that is definitely worth learning from.

[3]: http://www.amazon.com/Pearls-Functional-Algorithm-Design-Richard/dp/0521513383

[1]: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.33.7923

u/ephrion · 3 pointsr/haskell

I don't think type theory is what you're looking for. Type theory (and programming language theory) are mostly interesting from the perspective of a language designer or implementer. If you're just looking to upgrade your Haskell skills, then focusing on specific libraries and techniques will be faster.

With that said, here's my type theory track:

  • Type Theory and Formal Proof is a fantastic introduction to the lambda calculus, starting with the simple untyped lambda calculus and exploring each of its additions. It's very friendly to novices, and includes a guide to Greek letters and an introduction to sequent notation (the weird horizontal-bar logical notation). Ultimately, it develops the Calculus of Constructions with Definitions, which can be used to prove things using constructive logic.
  • Types and Programming Languages is a good read after that. It also starts with the untyped lambda calculus, and develops extensions on top of it. You'll implement a type checker and interpreter for the simply typed lambda calculus, and you'll add subtyping, record types, type inference, objects (!!!), parametric polymorphism, and existential types.
u/biglambda · 12 pointsr/haskell

I highly recommend The Haskell School of Expression by the late great Paul Hudak. Also you should learn as much as you can about Lambda Calculus in general like for example this paper.
After that you should learn as much as you can about types, Types and Programming Languages is really important for that.
Finally don't skip the important fundamental texts, mainly Structure and Interpretation of Computer Programs and the original video lectures by the authors (about the nerdiest thing you will ever watch ;)

u/bjzaba · 3 pointsr/haskell

Type theory is a different, though related field to category theory. I've found this to be an excellent book to start off, they might have it in your library: https://www.amazon.com/Type-Theory-Formal-Proof-Introduction/dp/110703650X - it's less focused on implementing type systems like Pierce's 'Types and Programming Languages' book, so it allows you to ramp up quickly to get to the juicy CoC stuff :)

u/JeffB1517 · 11 pointsr/haskell

Haskell is a really complicated language that demands a lot. It may not be possible.

Making it more popular though:

As others have mentioned, the tooling is complicated. Haskell has the same problem TeX had. Stack and the Haskell platform get part of the way there, but the installers need to set up editors and project tools to work out of the box, fully configured. In particular, include a fully configured Leksah, Geany, or Kate.

Finally, and this will be controversial: strip options. Ship one easy web framework, with a note in the documentation about where to find the full-featured but harder one. The database is preconfigured out of the box (SQLite or something), with a script for, say, MySQL and Mongo (single node on desktop), and then a link to how to do it for a real setup. Because the options are simple, there can be a simple management tool to make minor changes to the environment.

Then include targeted tutorials for that environment.

Paul Hudak's environment for https://www.amazon.com/Haskell-School-Expression-Functional-Programming/dp/0521644089/
was perfect. It got a Haskell, an editor and enough of an environment to do graphics and sound programming.

Basically Haskell platform got too focused on Haskell libraries and not focused enough on ecosystems. Make a Haskell the way Microsoft, Adobe or Apple would make a Haskell.

u/atium_ · 9 pointsr/haskell

Not what you are asking for really, but you'll get better with experience.


Take a few imperative algorithms and convert them over.
Solve some problems on HackerRank. Do it your way; afterwards, compare your solution with some of the other Haskell solutions.


Some functional algorithms and data structures are done very differently. Chris Okasaki has a book, Purely Functional Data Structures, that covers some (though it's for ML).


There are papers/articles on topics such as Functional Binomial Queues, and Hinze has a paper on Priority Search Queues that also covers implementations of Dijkstra's and Prim's algorithms.


The Haskell Wiki has a page listing functional pearls. Maybe also take a look at how dynamic programming and similar paradigms are done functionally.

For most algorithms you can write them in an imperative manner and use mutation and looping constructs, if you have to. But you aren't going to find some guide to convert any algorithm into idiomatic Haskell. Some functional implementations require you to think differently.
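That "imperative manner" escape hatch can look like this. A hedged sketch, with factorial chosen purely as a stand-in algorithm: the ST monad gives genuine local mutation behind a pure function.

```haskell
import Control.Monad (forM_)
import Control.Monad.ST (runST)
import Data.STRef (modifySTRef', newSTRef, readSTRef)

-- Imperative-style factorial: a mutable accumulator and a loop,
-- confined to ST so the result (and the function) stays pure.
factorial :: Integer -> Integer
factorial n = runST $ do
  acc <- newSTRef 1
  forM_ [1 .. n] $ \i ->
    modifySTRef' acc (* i)
  readSTRef acc
```

The type Integer -> Integer never mentions the mutation; runST guarantees the STRef can't escape, so callers can treat this like any pure function.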

u/FunctionalGopher · 1 pointr/haskell

I've dug into several Haskell books and resources online, and as crazy as it sounds, my favorite book was: https://www.amazon.com/Haskell-Cookbook-functional-applications-Applicatives-ebook/dp/B073QW9LS3/ref=sr_1_2?ie=UTF8&qid=1527444265&sr=8-2&keywords=Haskell+Packt

It's practical, short, gets you building projects, and has very clean examples and pictures. Very simple and easy to digest explanations.

u/itkovian · 20 pointsr/haskell

I can highly recommend Okasaki's book on data structures: https://www.amazon.com/Purely-Functional-Data-Structures-Okasaki/dp/0521663504, if you are looking for inspiration or techniques.

u/ReinH · 3 pointsr/haskell

Try Bird's Introduction to Functional Programming using Haskell, which this seems to be an update of! One of the best books on FP ever written IMO. And his Pearls of Functional Algorithm Design. And (it's a bit pricey though!) his Algebra of Programming.

u/NLeCompte_functional · 2 pointsr/haskell

I have not read Functional Programming In Scala so I am unsure of the scope.

But Purely Functional Data Structures is a classic: https://www.amazon.com/Purely-Functional-Data-Structures-Okasaki/dp/0521663504


It's largely focused on SML, but all the examples are also given in Haskell. And for learning Haskell (or Scala/F#/Agda/etc), porting the SML examples is a good exercise.

u/co_dh · 3 pointsr/haskell

To answer your general question: to use a parameter more than once, you need to duplicate it:

dup :: a -> (a, a)

Then you apply one function to each copy.
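Spelled out as a self-contained sketch — dup plus two helpers written by hand here (they mirror (***) and (&&&) from Control.Arrow, and the names both/fanout are invented for the example):

```haskell
-- Duplicate the argument.
dup :: a -> (a, a)
dup x = (x, x)

-- Apply f to the first component and g to the second
-- ((***) from Control.Arrow, written out by hand).
both :: (a -> b) -> (c -> d) -> (a, c) -> (b, d)
both f g (x, y) = (f x, g y)

-- "Duplicate, then call one function on each copy"
-- (equivalently (&&&) from Control.Arrow).
fanout :: (a -> b) -> (a -> c) -> a -> (b, c)
fanout f g = both f g . dup
```

For example, fanout sum length applied to a list yields the (total, count) pair in one pass over the duplicated argument.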

Algebra of Programming could help you.

https://www.amazon.ca/Algebra-Programming-Richard-Bird/dp/013507245X

u/eat_those_lemons · 1 pointr/haskell

That makes sense. I guess I'm slightly despairing because it's such a huge project and I'm the sole developer; everything is so tied together that it takes a full day just to add a simple button and propagate the changes through the code base.

Thanks for the a) reassurance and b) the recommendation

When you say SICP do you mean this book? https://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871/ref=asc_df_0262510871/

u/emarshall85 · 3 pointsr/haskell

I really liked the 2nd edition, though I found myself growing impatient, having already read LYAH by the time I found that book. Pre-ordered, regardless. Happy birthday to me! Kindle edition was immediately available, so I didn't have to wait for my birthday.

PS - available for pre-order on amazon.com. Didn't realize the reddit entry linked to Amazon already. Whoops... << walks out quietly >>

u/snatchinvader · 7 pointsr/haskell

A good book describing similar techniques for designing and implementing efficient data structures with lazy evaluation is Purely Functional Data Structures.
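For a taste of what's in that book, here is a hedged sketch of one of the simplest structures in that tradition, the two-list batched queue (names invented; Okasaki's lazy variants refine this amortized design toward worst-case bounds):

```haskell
-- Front list holds elements in order; back list holds them reversed.
data Queue a = Queue [a] [a]

emptyQ :: Queue a
emptyQ = Queue [] []

push :: a -> Queue a -> Queue a
push x (Queue f b) = check (Queue f (x : b))

pop :: Queue a -> Maybe (a, Queue a)
pop (Queue [] _)      = Nothing
pop (Queue (x : f) b) = Just (x, check (Queue f b))

-- Invariant: the front list is empty only when the queue is empty.
-- The occasional 'reverse' keeps push and pop amortized O(1).
check :: Queue a -> Queue a
check (Queue [] b) = Queue (reverse b) []
check q            = q
```

Pushing 1, 2, 3 and then draining the queue yields them back in FIFO order, with the reversal cost paid rarely and spread across operations.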

u/Lossy · 5 pointsr/haskell

You joke, but that style is actually used in The Algebra of Programming.

u/kqr · 6 pointsr/haskell

This is a completely unhelpful answer, but if you're looking to get to know the things you listed under not comfortable, there is