Best products from r/ProgrammingLanguages

We found 27 comments on r/ProgrammingLanguages discussing the most recommended products. We ran sentiment analysis on each of these comments to determine how redditors feel about different products. We found 30 products and ranked them based on the number of positive reactions they received. Here are the top 20.

Top comments mentioning products on r/ProgrammingLanguages:

u/jdreaver · 72 points · r/ProgrammingLanguages

Oh wow, I just went down the rabbit hole of CPS, SSA, and ANF while developing my compiler for a strict Haskell-like functional programming language.

I read the outstanding book by Appel on compiling using CPS, and was all ready to go to refactor my pre-LLVM IR to be CPS. Then I did more research and realized that while a number of optimizations are very natural in CPS, compiling CPS to machine code is not as simple. It felt like a really daunting project, and after wrestling with my CPS transformations for about a week I filed a CPS IR away in the "research again someday" bucket.

The best intermediate representation for a functional language I've found is A-Normal Form (ANF). Here is the original paper on the subject. The argument goes that ANF is much more compact and easier to understand than CPS, and still enables almost all of the same optimizations. Some recent work with join points in GHC and a few other papers/theses I read (linked below) convinced me that ANF was going to be my choice of IR.
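
To make the CPS-versus-ANF contrast concrete, here's a tiny Haskell sketch (a toy example of my own, not taken from the paper) showing the same expression in direct style, CPS, and ANF:

```haskell
-- Toy example: the expression f (g x) + 1 in three styles.

f, g :: Int -> Int
f = (* 2) -- stand-in definitions, just so the file runs
g = (+ 3)

-- Direct style: nesting hides the order of evaluation.
hDirect :: Int -> Int
hDirect x = f (g x) + 1

-- CPS: every function takes an explicit continuation, so evaluation
-- order is explicit, but the IR fills up with administrative lambdas.
fCps, gCps :: Int -> (Int -> r) -> r
fCps n k = k (f n)
gCps n k = k (g n)

hCps :: Int -> (Int -> r) -> r
hCps x k = gCps x (\gx -> fCps gx (\fgx -> k (fgx + 1)))

-- ANF: the same sequencing, but expressed with ordinary let-bindings
-- of named intermediate results; functions are only applied to atoms.
hAnf :: Int -> Int
hAnf x =
  let gx  = g x
      fgx = f gx
  in  fgx + 1

main :: IO ()
main = print (hDirect 1, hCps 1 id, hAnf 1) -- prints (9,9,9)
```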

I highly recommend sticking with LLVM. It is a very mature ecosystem and it gives you so much "for free". I think it's neat that my optimization pipeline will look like:

  1. Core -> Core optimizations
  2. Some small ANF optimizations
  3. Compilation to LLVM where I can have LLVM do some optimizations as well before spitting out machine code

Even now, I only have some very rudimentary optimizations implemented for ANF, but turning on -O3 when compiling to LLVM makes my toy programs just as fast as equivalent programs I wrote in C. I feel like using LLVM gives you the best of both worlds between ANF and SSA; you hand-write your ANF transformations in your compiler, and let LLVM do the neat things that can be done with SSA optimizations. Note: I am no compiler expert. Maybe I'm being naive in thinking the LLVM optimizations after ANF optimizations give me that much. I'd be happy for someone else to chime in here :)
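
In case it helps, here's a rough sketch of what an ANF-style IR datatype and one small ANF-level pass might look like (illustrative Haskell with hypothetical names; this isn't code from my compiler):

```haskell
-- A minimal ANF-style IR: every intermediate value is named by a let,
-- and operations are only ever applied to atoms.

type Var = String

data Atom
  = AVar Var
  | AInt Int
  deriving Show

data Bind
  = BAdd Atom Atom     -- a primitive operation on atoms
  deriving Show

data Expr
  = ELet Var Bind Expr -- let x = <bind> in <expr>
  | EAtom Atom         -- tail position: return an atom
  deriving Show

-- A tiny ANF-level optimization: fold additions of two constants.
constFold :: Expr -> Expr
constFold (ELet x (BAdd (AInt a) (AInt b)) body) =
  constFold (subst x (AInt (a + b)) body)
constFold (ELet x b body) = ELet x b (constFold body)
constFold e@(EAtom _)     = e

-- Substitute an atom for a variable (just enough for this sketch;
-- it respects shadowing by an inner let of the same name).
subst :: Var -> Atom -> Expr -> Expr
subst x a (EAtom atom) = EAtom (substAtom x a atom)
subst x a (ELet y (BAdd l r) body) =
  ELet y (BAdd (substAtom x a l) (substAtom x a r))
         (if y == x then body else subst x a body)

substAtom :: Var -> Atom -> Atom -> Atom
substAtom x a (AVar y) | y == x = a
substAtom _ _ atom              = atom

-- let t = 1 + 2 in let r = t + 3 in r
example :: Expr
example =
  ELet "t" (BAdd (AInt 1) (AInt 2)) $
    ELet "r" (BAdd (AVar "t") (AInt 3)) $
      EAtom (AVar "r")

main :: IO ()
main = print (constFold example) -- EAtom (AInt 6)
```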

Lastly, you mention ease of use and the ability to get started as important criteria. In that case something like ANF to LLVM is the obvious choice.

Good luck!

---

If anyone is interested, I gathered a lot of resources while researching CPS/ANF/SSA. I'll just dump them here:

Andrew Appel wrote a book called Compiling with Continuations (https://www.amazon.com/Compiling-Continuations-Andrew-W-Appel/dp/052103311X), where he explains how continuations can be used as the back end of a compiler. A lot has been written since on how continuations make many optimizations simpler, and on how CPS is essentially equivalent to SSA.

More stuff:

u/oilshell · 1 point · r/ProgrammingLanguages

Well I wasn't suggesting that the compiler infer the right allocation type. I was suggesting that there be tools so that programmers can figure out the allocation types to tell the compiler.

In other words it doesn't seem sufficient to just have a mechanism for the compiler to safely combine different allocation types. I would like some kind of dialogue and feedback with the compiler based on real measurements -- I actually don't want it to be magic.

Most GCs already use a fixed 2- or 3-level policy, e.g. a young generation and an old generation. But once you allow more flexibility than that, you have the opportunity not only for optimization, but also for pessimization :)
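
Just to make the fixed-policy point concrete, here's a toy sketch of a two-generation promotion rule (illustrative Haskell; the threshold and names are made up, not from any real collector):

```haskell
-- A toy model of a fixed two-generation GC policy: everything is
-- allocated young, and survivors get tenured after a fixed number
-- of minor collections. The threshold is arbitrary and illustrative.

data Generation = Young | Old
  deriving (Eq, Show)

tenureThreshold :: Int
tenureThreshold = 2

-- New objects always start in the young generation.
allocate :: Generation
allocate = Young

-- Promotion rule applied after each minor collection.
promote :: Int -> Generation -> Generation
promote survived gen
  | gen == Young && survived >= tenureThreshold = Old
  | otherwise                                   = gen

main :: IO ()
main = mapM_ print [promote n allocate | n <- [0 .. 3]]
-- Young, Young, Old, Old: the policy is fixed, which bounds both
-- the optimization and the pessimization you can get out of it.
```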

I mean it depends how big the codebases you're targeting are. If you are thinking of 20K lines of code all written by one programmer, then yes maybe that person can do a great job of choosing the allocation type.

But if you're thinking of 100K or 1M line codebases written by many people, I think the mechanism is only the beginning. Consider that C++ gives you tons of knobs to tune performance. Yet in real codebases people aren't able to make use of them.

Here is an example:

https://groups.google.com/a/chromium.org/forum/#!topic/chromium-dev/EUqoIz2iFU4

std::string is responsible for almost half of all allocations in the Chrome browser process; please be careful how you use it! In the course of optimizing SyzyASan performance, the Syzygy team discovered that nearly 25000 (!!) allocations are made for every keystroke in the Omnibox. We've since built some rudimentary memory profiling tools and have found a few issues:

This is in Chrome, one of the most widely used C++ programs out there. It is also highly optimized -- in some components, like V8. The quality of the engineers is high; they all know how to write good C++, but when you put them all together the result is really suboptimal.

I have a pessimistic view of software -- one book that exemplifies it is this:

https://www.amazon.com/Systems-Bible-Beginners-Guide-Large/dp/0961825170

e.g. "large systems always operate in degraded mode". I don't think anyone would argue that's not true of the web -- it operates in a perpetual and permanent degraded mode (while being fantastically useful). While I don't understand that much about 3DWeb, the same could be true if it gets large enough.

Anyway, this might seem like it's coming out of left field... it's kind of a philosophical critique. The point is that I like the idea of gradual memory management in the abstract -- in fact I have a really inefficient codebase now in OSH that might benefit from it. But I guess from the incomplete picture I have, it seemed like a lot of the annotations were for single variables? e.g. lex or var? That doesn't seem like it will work well in a large program. Maybe I am misunderstanding.

I also think there is the idea of not devoting too much language real estate to one feature. I mentioned automatic data layout as a related performance optimization that could be just as important, and you might need "room" for that in your new language Cone.

tl;dr:

  • Whenever you look at dynamic measurements of how large programs behave and perform, you get a big surprise. Even if you wrote the program, or even if you paid a lot of money to hire excellent engineers :)
  • Memory management for small programs is sort of a solved problem, either by GC or C++-like programmer choices. But memory management for large programs is very much unsolved. So I guess that is why I am holding GMM up to a high standard.

u/RobertJacobson · 1 point · r/ProgrammingLanguages

Here's my attempt to be helpful!

  • Borrow or buy Simon Peyton Jones' The Implementation of Functional Programming Languages (Amazon, free PDF version).
  • Also read Implementing functional languages: a tutorial, which is a reimagining of the above for use in the classroom.
  • Read through The ZINC Experiment, Xavier Leroy's exposition of his earliest OCaml implementation.
  • I really like the LLVM Kaleidoscope tutorial series. It's not about compiling functional languages; rather, the compiler itself is written in OCaml.
  • I second u/sociopath_in_me's advice to try to tackle Crafting Interpreters again.
  • Check out The Optimal Implementation of Functional Programming Languages by Andrea Asperti and Stefano Guerrini (Amazon). There are PDFs of it all over the internet, but I don't know what its copyright status is.

Regarding Asperti and Guerrini, there are a few people on this subreddit who are working on cutting-edge research compilers for functional languages based on term rewriting. I've found this subreddit as well as r/Compilers to be very friendly and helpful in general, so I encourage you to take advantage of them. Ask questions, bounce ideas off of people, etc.

u/bjzaba · 7 points · r/ProgrammingLanguages

I thought the extreme use of Comic Sans on this site was pretty amusing. :D

The Lambda Cube is pretty cool - http://www.rbjones.com/rbjpub/logic/cl/tlc001.htm - I'm not sure the page does the best job at explaining it though. The first time I saw a good explanation was in Type Theory and Formal Proof: An Introduction.
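
For reference, the cube can be stated compactly (my summary, not from the linked page): start from the simply typed lambda calculus and add three independent axes of dependency.

```latex
\documentclass{article}
\begin{document}
% The Lambda Cube: eight systems generated from the simply typed
% lambda calculus by three independent axes of dependency.
Starting from $\lambda{\to}$ (terms depending on terms), each axis adds
one kind of dependency:
\begin{itemize}
  \item terms depending on \emph{types}: polymorphism, $\lambda 2$ (System F);
  \item types depending on \emph{types}: type operators, $\lambda\underline{\omega}$;
  \item types depending on \emph{terms}: dependent types, $\lambda P$.
\end{itemize}
Combining all three axes yields $\lambda C$, the Calculus of
Constructions, at the corner opposite $\lambda{\to}$.
\end{document}
```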

u/DonaldPShimoda · 8 points · r/ProgrammingLanguages

I've peeked at this free online book a few times when implementing things. I think it's a pretty solid reference with more discussion of these sorts of things!

Another option is a "real" textbook.

My programming languages course in university followed Programming Languages: Application and Interpretation (which is available online for free). It's more theory-based, which I enjoyed more than compilers.

But the dragon book is the go-to reference on compilers; it's slightly old but still good. Another option is this one, which is a bit more modern. The latter was used in my compilers course.

Outside of that, you can read papers! The older papers are actually pretty accessible because they're fairly fundamental. Modern papers in PL theory can be tricky because they build on so much other material.

u/ErrorIsNullError · 6 points · r/ProgrammingLanguages

TAPL (Types and Programming Languages) is great for type theory stuff.

I'm working through Compiling with Continuations right now, and it's pretty good as a practical way to specify semantics that also has a history of being useful in compilers. Matt Might's writeup gives a flavor.

u/sv0f · 3 points · r/ProgrammingLanguages

All of your questions are pretty much the set of reasons why the Lisp family of languages was invented. Have a look at Common Lisp, Scheme, and the many programming languages books based on these languages. An "introductory" one is here, an advanced one here, and an even more advanced one here.

(Check whether there are newer editions of these. You'll probably want the newest ones so that you can easily type their code into a modern Scheme or Common Lisp implementation.)

u/samrjack · 2 points · r/ProgrammingLanguages

I would say go with whatever your computer uses so that you can follow along (unless your computer uses something really obscure).

As for books, I can only really recommend the places I learned x86 from, which would be Hacking: The Art of Exploitation, since it puts assembly in the context you'll find it in most often (looking through assembled code), so you learn many useful tools along the way. There's also the textbook I had in college (you can find it cheaper if you look around), which covers many other topics relating to computer memory and whatnot.

Though for just learning some basic assembly, look for some simple resources online. It's not too hard to learn generally speaking so you should be fine.

u/suhcoR · 3 points · r/ProgrammingLanguages

Yes. Here are some papers about it if you're interested: https://web.archive.org/web/20050510122857/http://www.iis.sinica.edu.tw/~trc/languages.html They refer to earlier work, which in turn refers to Lisp and a precursor of CLOS.

The Art of the Metaobject Protocol describes the MOP. If you're looking for a general book about CLOS, you could have a look at e.g. https://www.amazon.com/Object-Oriented-Programming-COMMON-LISP-Programmers/dp/0201175894.

u/timlee126 · 3 points · r/ProgrammingLanguages

Thanks.

Are MOP and CLOS the same thing?

Now there are three books mentioned.

u/ablakok · 3 points · r/ProgrammingLanguages

Nice list. But how about DCPL, Design Concepts in Programming Languages?