Detecting slow plugins in Vim

Like most Vim/Neovim users I’ve built up a set of plugins and config files over the years that customize the editor to suit me. If you’re interested in my current setup you can see it in my dotfiles repo.

Since I usually add new plugins and settings one at a time, I can normally work out the culprit when Neovim starts running much slower. However, over the past few weeks I’ve had a particular issue with eruby files and there was no obvious cause.

I thought commenting out recent changes to my config, including new plugins, would be a good start. Eventually I got down to loading with no plugins and the default config, but still the issue persisted.

This is where I went searching for help and discovered the built-in profiling tool. It is really simple to run:

:profile start profile.log
:profile file *
:profile func *
[[ Trigger the slow action here ]]
:profile pause

Inspecting the generated log file will show a list of the slowest functions; at the top of the list for me was the cause of my slow eruby files.

FUNCTIONS SORTED ON TOTAL TIME
count  total (s)   self (s)  function
    6  95.409081   0.015457  <SNR>14_LoadFTPlugin()
   80   0.321129   0.008854  airline#check_mode()
   16   0.305660   0.041821  airline#highlighter#highlight()
 1749   0.213133   0.108977  airline#highlighter#get_highlight()
  365   0.199579   0.019541  <SNR>82_exec_separator()
  819   0.191419   0.054809  airline#highlighter#exec()
   33   0.175094   0.002036  <SNR>112_NeoVimCallback()
   12   0.171814   0.001186  <SNR>108_ExitCallback()
   11   0.166302   0.001700  <SNR>107_HandleExit()
   11   0.155716   0.008071  ale#engine#HandleLoclist()
   12   0.108573   0.000808  airline#extensions#tabline#get()
   12   0.107765   0.002524  airline#extensions#tabline#buffers#get()
    6   0.101130   0.000250  ale#events#LintOnEnter()
    6   0.100799   0.000365  ale#Queue()
    6   0.099583   0.001838  <SNR>100_Lint()
    6   0.097989   0.003567  17()
  730   0.097386   0.010310  airline#themes#get_highlight()
   18   0.096341   0.017089  14()
    6   0.095190   0.002562  ale#engine#RunLinters()
 3498   0.088748             <SNR>82_get_syn()

95 seconds in one function!

93.687760 let &l:path = s:path . (s:path =~# ',$\|^$' ? '' : ',') . &l:path

Searching through the rest of the log even gave me the specific line of ftplugin/eruby.vim causing the issue. Unfortunately my vimscript knowledge is not up to fixing this, but commenting it out appears to be enough for now, with no obvious side effects.

For more details on the profile feature there’s a detailed help doc at :help profile. Profiling can also be limited to specific files, which is useful if you’re debugging your own config or a particular plugin.
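For example, to narrow profiling down to a single plugin you can point the file pattern at that plugin’s scripts. The */vim-airline/* pattern below is just an illustration, adjust it to wherever your plugins live, and note that a script only gets profiled if it is loaded after the command runs:

:profile start airline.log
:profile! file */vim-airline/*
[[ Trigger the slow action here ]]
:profile pause

The ! tells Vim to also profile the functions defined in the matching scripts, not just the time spent sourcing them.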

Zero-based indexing and developer expectations

The array is one of the most common data structures in programming; in simple terms it is an indexed (ordered) collection of elements. Arrays work differently depending on the language you’re writing in: in C you define the element type up front, whereas in a dynamic language like JavaScript you can throw just about anything in there.

Because arrays are everywhere in software most developers quickly learn their behaviour and use them without much thought.

One common property of arrays across most languages is that their indexing starts at zero. It can seem odd to new developers, but it’s just another behaviour you internalise and rarely think of again.

my_beatles_array = ["George", "John", "Paul", "Ringo"]
my_beatles_array[0]
>> "George"

I recently needed a list of months to iterate over in Ruby, so I reached for the standard library and the MONTHNAMES array.

I did not read the documentation first, as I just expected it to be a list of months; if I had, I might have noticed the warning that it starts with nil.

The Ruby Date library designers were clearly thinking about lookup by index first, as opposed to iterating over the whole list. Padding the array with a nil makes each lookup match the month number.

Date::MONTHNAMES[1]
>> "January"

Date::MONTHNAMES[5]
>> "May"
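If, like me, you only want the twelve names to iterate over, one simple fix is to drop the leading nil first, for example with compact (drop(1) works just as well):

Date::MONTHNAMES.compact.first(3)
>> ["January", "February", "March"]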

This is a case where zero-indexing could seem confusing when mapped to a real-world concept, and I can see how you might want to design for the ergonomics of a month name lookup.

The designers of JavaScript took a different approach: call getMonth() on a Date object and you’ll get a zero-indexed month.

Since you’re not dealing with an array directly in the JavaScript case you might expect zero-indexed months to be even less likely; getMonth() is a method on the Date prototype. The API design here shows a clear preference for zero-indexing, even if January being month zero makes little sense outside of software.
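A quick example, with January as month zero in both the constructor and the getter:

new Date(2024, 0, 15).getMonth()
>> 0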

I’m not sure there’s a right answer here. Reflecting the real world in software always involves trade-offs; you just have to be consistent if you want to keep your language’s users happy.

Galactic emulation

Preserving old media seems like a worthwhile task, assuming you think it has some cultural value.

Take a 35mm reel of film, for example: you can “preserve” it by making copies and storing them in a safe environment. The film can be considered preserved as long as you have the print and the equipment to project or transfer it.

If your media starts out digital you need specific software, and in some cases hardware, to consider it preserved.

To play a digital film you need software that can read the file container format (AVI, MOV, MP4, etc.) as well as the video codec (MPEG-4 etc.). If no one bothers to write a player for your legacy video format in the future, your media remains trapped on old technology.

Preserving software

Things get a little trickier when the media you want to preserve, like a video game, was itself released as software. The released binary will have been built for particular hardware, and in many cases won’t run on newer hardware without some work.

A Nintendo 64 binary was compiled to run on the Nintendo 64, not your modern laptop or phone. If you had access to the original source code, or could cleverly reverse engineer your way to it, you could rebuild for new hardware and preserve the game that way.

However, in many cases you won’t have access to that original source code, leaving you with the options of recreating or emulating the original hardware that the binary ran on.

Emulation of classic games consoles has been around for decades. I’ve attempted to write emulators myself a couple of times, and they make for interesting little hobby projects. If the hardware is relatively simple, most of the work in writing an emulator is in mapping the memory and hardware addresses of the original console.

An emulator recreates the behavior of the target hardware in software.
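As a toy illustration of that memory-mapping work, here is a sketch in Ruby (the address ranges are invented, not any real console’s memory map). The heart of most simple emulators is a bus that routes every CPU read and write to whichever component owns that address:

class Bus
  def initialize(rom, ram)
    @rom = rom # the game binary, read-only
    @ram = ram # mutable work RAM
  end

  # The emulated CPU calls this for every memory read; the address
  # decides which piece of "hardware" actually answers.
  def read_byte(address)
    case address
    when 0x0000..0x7FFF then @rom[address]
    when 0x8000..0xFFFF then @ram[address - 0x8000]
    else raise format("unmapped read at %04x", address)
    end
  end

  def write_byte(address, value)
    raise format("attempted write to ROM at %04x", address) if address <= 0x7FFF
    @ram[address - 0x8000] = value
  end
end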

Emulation without the original software

I recently got back into Star Wars Galaxies (SWG), an MMO released in 2003 that was shut down in 2011, and this is where my inspiration for this post originates.

Star Wars Galaxies - UI
The very busy UI of Star Wars Galaxies, quite typical of MMOs of this era.

There is no need to emulate the SWG client on modern PCs; thanks to the relative stability of Windows as a platform you can still run games well over 20 years old. However, a lot of the logic in an MMO lives on the server, and that’s code you never have access to.

When you’re playing a multiplayer game in a persistent world you need a server that’s on 24/7 preserving the state of the world. The server has to decide where to spawn the AI creatures and characters of the world, the items they have to loot, the location and state of all the players, along with any other state that makes the world feel alive.

Since players never have access to this server code, any community that wants to preserve and continue playing a game after it is officially shut down has to reverse engineer it.

As a game client, your interface to the server code is the network, so most MMO server emulation starts with packet sniffing the game’s network activity before the official servers are closed. Your new server then needs to recreate this same interface. Although we’re not talking about hardware emulation in the same sense as the Nintendo 64 example, this is still emulation.

Reverse engineering a game’s network protocol is not easy, particularly if actions in the game do not produce predictable, easily identifiable network traffic, as was apparently the case with The Matrix Online.

In the case of Star Wars Galaxies there are several projects running unofficial servers, some emulating from scratch, and others using parts of the original server source.

The legal issues

Unfortunately the legal status of these emulators is still not clear, which is a problem for organizations like museums that want to preserve games in their original working state.

WoW Classic Molten Core

The best these organizations might hope for is for game developers to open source their server code when they officially shut down their games. However, the recent success of WoW Classic might have game publishers thinking of potential reboots for their old games, and cashing in on any nostalgia could put them off supporting community-run servers.

But if the creators keep their games running, maybe server emulation won’t be so important.

Managing Haskell versions

The need for a tool to manage language versions can be a little confusing for new developers, but as soon as you’re working professionally with legacy projects the need to have multiple versions of the same language installed becomes clear.

Projects built years apart will have different dependencies, each expecting different language runtimes. Manually managing different interpreters and build tools in your path is painful, and just not practical.

As a Ruby user, my first exposure to language version managers was with RVM and rbenv. Eventually I moved to chruby, a lightweight manager written in shell script, and it worked great for me for several years. However, when you begin to work with multiple languages, each with their own version managers, the appeal of a single, modular tool becomes clear.

asdf

asdf is that modular tool. I moved to it a couple of years ago for managing Ruby and JavaScript versions, and would recommend it in most cases.

Until recently I had been using it for Haskell too, which was fine for my simple beginner projects. However, I have started to work on bigger projects with several dependencies, and to do this I’ve been using the Stack build tool.

Haskell, Stack and Cabal

Most tutorials will tell Haskellers to start with just GHC, the Haskell compiler. I think part of this comes out of the slightly messy story Haskell has around tooling. As well as Stack, which pulls packages from a repository called Stackage, there’s an older packaging tool called Cabal. Cabal is used by Stack, but it has its own package repository (Hackage), and its own share of issues around dependency management.

I’m still trying to understand the differences and best practice around packaging and managing dependencies.

This all recently led me to this error:

‘fail’ is not a (visible) method of class ‘Monad’

That line was highlighted in red, along with a few others, when I attempted to build a new Stack project with a lot of dependencies. It left me with a failing build, and Googling for answers offered few clues.

The error message suggested pinning the version of the “primitive” library, a dependency somewhere in the tree. I tried this, and also tried running the build with the allow-newer option set.

The issue came down to a detail I overlooked when installing Stack. Managing installed instances of GHC (the compiler) is one of Stack’s main features. So, when running stack build it was attempting to build against a particular Stackage snapshot (lts-12.26), as set in the “resolver” parameter, but asdf-haskell had filled my environment with references to a newer, incompatible version.
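If you haven’t used Stack before, the resolver is set per project in stack.yaml; a minimal one looks roughly like this, with the snapshot being whatever your project was generated with:

resolver: lts-12.26
packages:
- .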

The problem was solved by removing asdf-haskell and just installing Stack directly.
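For reference, the fix was roughly the following two commands (the install script is the one documented on haskellstack.org, and the exact asdf plugin subcommand may vary between asdf versions):

asdf plugin remove haskell
curl -sSL https://get.haskellstack.org/ | sh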

Even though Stack’s FAQ says it shouldn’t affect any other installations of Haskell, it obviously can’t account for other tools interfering with its own build process.

More options for building

I might have a clearer idea of managing installed versions of Haskell now, but I’m still trying to understand the best options for managing builds; there seem to be trade-offs with whatever you use.

I have noticed a few Haskell users prefer Nix, a purely functional package manager with its own language. I read through a few tutorials and might come back to it, but right now it doesn’t seem worth taking on along with learning Haskell.

Added to that there is also Shake, a make-like build tool for Haskell; although it is not for managing dependencies itself, it does have some crossover with Stack in features.

If you’re counting along that means we have Cabal, Stack, Nix, and if you include the built-in dependency tracking, GHC. So many options for tracking dependencies that I’m starting to miss the simplicity of Ruby gems.