I don't know who would care to read this post. Maybe econ grad students.
Aruoba and Fernández-Villaverde have a nice paper comparing programming languages for solving a standard DSGE model. They have provided a real public good here, and the paper is worth a look for economists who code. The headline finding is that C++ and Fortran are still the fastest, and (somewhat surprisingly) C++ is slightly faster.
I originally used Matlab for my dissertation model. It was taking a long time to solve, and people in my department finally convinced me to switch to Fortran.* The most time-intensive parts of my solution algorithm take about one tenth of the time they took in Matlab. Other parts got even bigger speedups.
A lot of people give me a hard time about Fortran and tell me I should switch to Python or something similar. The reason I won't do that is clear enough from the paper. Python is, by all accounts, a very intuitive and versatile language. But my model can sometimes take 24 hours to solve, and even doubling or tripling that would be very costly. To calibrate (or estimate) a model, one must solve it many times. Also, other people in my department use Fortran (it's pretty popular in macro), so there are some nice agglomeration returns. Fortran is very common in scientific computing, so there is a large library of algorithms you can take off the shelf (see, e.g., Numerical Recipes). It's a really easy language to learn--in fact, it's fairly similar to Matlab.
A common critique of Fortran (voiced by the first commenter here) is that, these days, hardware is cheap and programmers are expensive--so easier, more versatile languages are best. That's probably true in much of industry, particularly things like web design. But for tasks that require serious number crunching, and in an academic world with limited resources, hardware is still a binding constraint (and grad student labor--i.e., mine--is cheap). I've been solving my model on 180 processors. A lot of people don't have access to that kind of hardware (until a few months ago, I couldn't use more than 18). Furthermore, there are diminishing returns to parallelization: above 180, I get basically no speedup from adding workers. So I'm not even sure that better hardware could offset Fortran's speed advantage in my case. (Right now, other people in my department are probably wishing I would quit using 180 processors...).
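The flattening-out I see is roughly what Amdahl's law predicts: if any fraction of the run is serial, speedup is capped no matter how many workers you add. A quick toy illustration in Python (the parallel fraction p below is made up for illustration, not measured from my code):

```python
# Amdahl's law: if a fraction p of the work parallelizes perfectly,
# the best possible speedup on n workers is 1 / ((1 - p) + p / n).
# The value p = 0.99 here is purely illustrative.

def amdahl_speedup(p, n):
    """Predicted speedup on n workers when a fraction p of work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

# With p = 0.99, going from 18 to 180 workers helps a lot...
s18 = amdahl_speedup(0.99, 18)      # roughly 15x
s180 = amdahl_speedup(0.99, 180)    # roughly 65x
# ...but going from 180 to 1800 adds comparatively little: the serial 1%
# caps the speedup at 1 / (1 - p) = 100 no matter how many workers run.
s1800 = amdahl_speedup(0.99, 1800)  # still under 100x
```

The point is just that past some worker count, the serial bookkeeping (price updates, convergence checks) dominates, so faster single-core execution is the only margin left.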
If you are doing representative agent models, the speed differences between languages are probably irrelevant. In that case, you probably care more about ease of use and applications other than the number crunching, like making charts. Fortran is pretty bad in this department--I dump all of my output into Matlab and make charts there, and I've been meaning to move those codes over to Python or R so I won't be so reliant on licensed software. But if you plan to only do those kinds of models, Fortran is probably not the right choice. Use Dynare, which is awesome.
If you are planning to solve models with some nontrivial heterogeneity, you need to choose your language carefully. In case you don't know: in a model in which agents differ over a state space, equilibrium prices don't just fall out of a first-order condition. You have to solve for them. The usual way is to guess prices, obtain policy functions, add up everyone's choices, check market clearing, and guess again. While a rep agent model only requires you to find policy functions once, a het agent model requires you to do it many times while you search for the right prices. (A nice side effect of solving models this way is that you get to see partial equilibrium results while it solves.) Computing time grows exponentially with the number of heterogeneity dimensions you have, thanks to the curse of dimensionality. Also, the more prices you have to find, the longer it will take (here's a tip: constant returns to scale technology makes factor prices move in lockstep, so knowing one implies the other). When I went from needing to find one price to needing to find two, it more than doubled my computation time.
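The guess-check-update loop above can be sketched in a few lines. What follows is a toy Python illustration, not my actual code: `excess_demand` is a hypothetical stand-in for the expensive step (solve every agent's problem at the guessed price, then aggregate), the bisection does the price search, and `wage_from_r` shows the CRS tip for a Cobb-Douglas technology (the capital share value is made up):

```python
# Toy sketch of the guess-and-verify price loop (illustration only).

def excess_demand(price):
    # Hypothetical stand-in for the expensive step: at this price guess,
    # solve for policy functions, add up everyone's choices, and return
    # aggregate demand minus supply. Here it's just a toy function that
    # happens to "clear" at price = 2.0.
    return 2.0 / price - 1.0

def solve_price(lo=0.5, hi=10.0, tol=1e-8):
    """Bisection: guess a price, check market clearing, update the guess."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if excess_demand(mid) > 0.0:
            lo = mid   # excess demand: the price guess was too low
        else:
            hi = mid   # excess supply: the price guess was too high
    return 0.5 * (lo + hi)

ALPHA = 0.36  # made-up Cobb-Douglas capital share, for illustration

def wage_from_r(r, alpha=ALPHA):
    # The CRS tip: with Y = K^a * L^(1-a), the rental rate r = a*k^(a-1)
    # pins down the capital-labor ratio k, which pins down the wage --
    # so you only have to search over one factor price, not two.
    k = (r / alpha) ** (1.0 / (alpha - 1.0))
    return (1.0 - alpha) * k ** alpha

price = solve_price()
```

In a real het-agent model, each call to `excess_demand` is itself a full dynamic-programming solve over the state space, which is why the outer price search is where all the computing time goes.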
This stuff matters because I think some of the most interesting work being done in macro right now is the empirical stuff based on micro data. To me, heterogeneity is what makes macro interesting. The theories that have to go with the rich micro data are often going to require hard computational work.
*I'll save commenters some time and simply note that I've already heard the one about how you used Fortran in college in the 1970s. It is somewhat funny that this language is still in wide use in scientific computing; but it's also not a huge surprise, since doing floating point calculations over and over again doesn't require the latest bells and whistles. We're not trying to build Instagram here. Also, modern Fortran is a pretty different language from Fortran 77 (it was last updated in 2008).