Windows users can feel like second-class citizens in the Ruby world. Ruby gems and tools often don’t work quite right on Windows hosts. But we can fix this. Making your code Windows-compatible isn’t as difficult as you might think, and in the process you’ll learn to understand Ruby better!
Welcome to the desert of the PC
The vast majority of Ruby software is developed on UNIX-like machines. Go to any Ruby conference and you’ll see a sea of Apple MacBooks, with a sprinkling of PC laptops running Linux. Each one hosts code destined to run on a Linux or BSD server.
As a result of this near-monoculture, explicit Windows support tends to be lacking among Ruby libraries and tools. And ironing out Windows compatibility issues is often seen as something of an occult art.
80% of the world’s population uses Windows as their desktop operating system. For anyone in this group who might want to get started with Ruby, the current state of affairs raises the barrier to entry considerably. They have to either limit themselves to the subset of tools that have been coded with cross-platform compatibility in mind, or acquire, install, and learn a completely new operating system.
Windows compatibility: not as hard as you think
Fortunately, writing Ruby code that is truly cross-platform isn’t all that difficult. You don’t have to learn the Windows programming API in order to do it. You don’t even need to learn a list of “workarounds” for “Windows bugs”.
In fact, all you have to do to write portable code in Ruby is to become aware of certain assumptions that you’re probably harboring as a result of developing code only on UNIX-like operating systems. And the good news is, by becoming conscious of these assumptions, you’ll gain a better understanding of the tools you work with every day. And you’ll be able to write code that’s portable by default to any platform.
Seeing is Believing
One of my most indispensable Ruby tools is the seeing_is_believing gem by Josh Cheek. Lately I’ve been sorely missing having it available when I’m writing code on my Windows box, so I decided to make it work on Windows. This turned out to be a sizable but highly instructive project that took me the better part of two days.
The changes I ended up making hit upon most of the high points (and a few of the more obscure ones) of writing Ruby code that’s portable to Windows. So I thought that, while this experience was still fresh in my mind, I’d use it to write a little intro to making your Ruby code Windows-compatible. (I’m also writing this in answer to some of the questions Josh asked when reviewing the pull request.)
Note that this guide is far from complete. And it probably contains some inaccuracies. But in my experience the issues I list here make up at least 80% of the Windows compatibility problems I encounter.
Note also that everything below applies to Windows builds of Ruby. It probably isn’t accurate for code running under the new Ubuntu-on-Windows layer.
Get your file modes right
In Ruby, as in C and just about every other programming language on the planet, files can be opened in one of two modes: text, or binary.
The principal difference between text mode and binary mode has to do with whether line endings are translated or not.
Windows newline conventions
Let’s get straight into examples. I have a file, hello.txt, that I wrote in Notepad. Let’s read it in:
File.read("hello.txt") # => "Hello, Ruby\n"
It contains the string “Hello, Ruby”, followed by a newline, represented as \n.
Or does it?
On Windows, newlines in text are represented by the sequence Carriage Return+Line Feed (CR+LF).
Reading in binary mode
Let’s look at the actual byte-for-byte content of the file. We do this by reading it in binary mode.
File.read("hello.txt", mode: "rb") # => "Hello, Ruby\r\n"
This time, we can see the Ruby string escape representation of a CR+LF at the end: \r\n.
Writing files in text and binary mode
Now let’s see what happens when we write to a file in the two different modes.
First, we’ll write a string to a file without any special mode specifiers.
File.write("output.txt", "Hello, Windows\n")
File.read("output.txt", mode: "rb") # => "Hello, Windows\r\n"
When we read it back byte-for-byte using binary mode, we can see that Ruby translated our LF into CR+LF (\r\n) on disk.
But watch what happens when we write the file in binary mode:
File.write("output.txt", "Hello, Windows\n", mode: "wb")
File.read("output.txt", mode: "rb") # => "Hello, Windows\n"
This time, the exact content of the disk file has just a linefeed (\n), no carriage return. By switching on binary mode, we forced Ruby to skip the normal translation of newlines to CR+LF.
Newline translations in standard streams
Are files the only objects where we have to take line ending translation into account? Nope.
Here’s a little program that slurps up input from $stdin and then echoes the internal string representation of that input.
input = $stdin.read
puts input.inspect
Let’s pipe some text into this program.
> type hello.txt | ruby reader.rb
"Hello, Ruby\n"
Remember, hello.txt on disk contains a Windows-standard CR+LF line ending. But here we see that it has been translated into an internal LF.
Is it possible that Ruby’s standard input stream is also doing line-ending translation? Let’s ask it.
$stdin.binmode? # => false
The binmode? predicate tells us whether an IO object is in binary mode. If it isn’t, that means it’s in text mode, AKA line-ending-translating mode.
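To see the flag in action, here’s a quick sketch (the filename is arbitrary): opening a file with the "b" modifier sets binary mode on any platform, not just Windows.

```ruby
# Create a throwaway file so we can open it both ways.
File.write("example.txt", "hi\n")

# The default mode is text mode...
File.open("example.txt", "r") { |f| f.binmode? }  # => false

# ...while the "b" modifier switches the stream to binary mode.
File.open("example.txt", "rb") { |f| f.binmode? } # => true
```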
Changing an IO object’s mode
We can change this, though. Here’s a new version of our program that uses .binmode (without the ? on the end) to switch $stdin into binary mode before reading from it. (Note that this change is one-way: you can’t go back to text mode without re-opening a file.)
$stdin.binmode
input = $stdin.read
puts input.inspect
Now let’s execute our command-line using this new program.
> type hello.txt | ruby readerb.rb
"Hello, Ruby\r\n"
This time, we can see the raw CR+LF (\r\n) at the end of the text.
Lessons learned so far
What do we know so far?
- Windows text files (and streams) represent line endings with CR+LF, whereas UNIX-like OSes use just LF.
- Reading from a file or stream in text mode auto-translates incoming CR+LF pairs to an internal LF. Writing internal LFs to a stream in text mode auto-translates the LFs back to CR+LF.
- The default mode is text mode.
- To skip any translation of characters, files must be opened in binary mode.
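To make these rules concrete, here’s a small round-trip sketch (the file name is arbitrary): in binary mode, a CR+LF sequence survives a write-and-read cycle byte-for-byte on every platform, because no translation happens in either direction.

```ruby
# A byte sequence that text mode would rewrite on Windows.
data = "line one\r\nline two\r\n"

# Write and read in binary mode: no newline translation either way.
File.write("roundtrip.bin", data, mode: "wb")
round_tripped = File.read("roundtrip.bin", mode: "rb")

round_tripped == data # => true
```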
A little history
But why is this even necessary? Why do line endings need to be translated at all? Isn’t this an example of Windows needlessly over-complicating things?
Well, the history of line ending characters used by computers is long and complicated, and I’m not going to go deeply into it in this article. (If you’re a RubyTapas subscriber, I’ll be going into much greater depth on this in some upcoming episodes.)
Here’s the short version: while most computers had standardized on ASCII text encoding by the 1970s, there was never a standard or even a convention for how to represent newlines using the various teletype control codes ASCII provided. Some OSes used the Line Feed (LF) code; some used the Carriage Return (CR) code. Some used CR+LF, some used LF+CR, and some used even more obscure characters.
The C heritage
Ruby is built on the C programming language, and like most modern languages, its system calls and conventions are in large part based on those of C. The creators of C knew that all of these different text line ending conventions existed out there in the world of operating systems. They knew they wanted to build a language where it was possible to write portable code that would work on any of those operating systems.
And so, they decided that C would have a standard, internal, logical representation of newlines. And that whenever text was read into, or written out of, a C program, those internal newlines would be translated from and to whatever the platform-native newline convention happened to be, whether CR+LF, LF+CR, CR, or something else.
Of course, this meant that they had to choose an ASCII code that they would use for this standard internal newline representation. And they settled on the Line Feed (LF) code. They picked this one because C was developed on UNIX, and the UNIX native newline character was LF.
That meant that when C code was running on a UNIX host, the process of translating internal to external linefeeds would be a matter of replacing LF… with LF. Or in other words, doing nothing.
Your code is wrong
But here’s the part you must understand: when you open a file to read or write without explicitly specifying binary mode, Ruby is always performing text translation for you. Even when you’re running it on a UNIX-like OS.
It’s just that when you’re running on Linux or OSX, where the native newline character happens to be LF, the translation algorithm is: “do nothing”.
What does this mean? It means that a lot of the Ruby I/O code you’ve written on UNIX-like operating systems probably works by accident. By far the most common issue I see when porting Ruby programs to work on Windows is binary data being read or written in the default text mode. Yes, it’s “text mode” even on a UNIX-like OS.
How to fix it
Popular misconceptions to the contrary, text mode vs binary mode isn’t a “Windows thing”; it’s a C thing. Which makes it a Ruby thing.
It’s fairly easy to avoid problems with your data being mangled on non-UNIX hosts.
The basic rules—no matter what OS you are coding on—are:
- For all binary data—like images, executable files, binary object dumps, database internal formats, etc—use the binary flag (“b”) when opening files or streams.
- For reading and writing human-readable text, like HTML or YAML, to the local disk, use the default (text) mode.
- When in doubt, prefer binary mode. In the worst case scenario, most modern Windows programs have no problem with UNIX-style line endings. And binary mode ensures that there will be no line-ending translations to potentially break finicky data.
- Be consistent with reads and writes. If you write a file in binary mode, read it back in in binary mode.
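As a hedged illustration of the first and last rules (the file name and data here are made up), this is what consistent binary-mode handling looks like for a binary object dump:

```ruby
# Marshal produces binary data, so write it with "wb"...
record = { name: "Ruby", version: 3 }
File.open("record.dump", "wb") { |f| f.write(Marshal.dump(record)) }

# ...and read it back with "rb". Text mode could mangle any CR or LF
# bytes that happen to appear inside the dump.
restored = File.open("record.dump", "rb") { |f| Marshal.load(f.read) }
restored == record # => true
```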
Get your encodings right
As you probably know, there are many, many different encoding standards used by computers to represent text. But if you’re an English-speaking programmer who is used to coding on modern Mac OS X or Linux machines, you might not spend very much time thinking about what encodings you are using. And it’s not because those platforms support text encoding in a somehow “better” way. It’s that, just as with opening files in text mode instead of binary mode, on those platforms your programs are probably just working by accident.
Linux standard encodings
What do I mean, “by accident”?
Let’s start by running some code on a Linux box. We’ll start by constructing a string, and asking it what its encoding is. Then we’ll ask Ruby what the system default encoding is.
puts "foo".encoding
puts Encoding.default_external
When we run this, we get:
$ ruby encodings.rb
utf-8
utf-8
It seems that they are both the same! This is convenient. This means that if we try to write a string to a file, Ruby will assume that we want to write UTF-8 data to a UTF-8 encoded file. That is: it won’t do any transcoding at all. The bytes in the string will be the bytes on disk.
Windows standard encodings
Now let’s try checking the same properties on Windows:
> ruby encodings.rb
UTF-8
IBM437
The internal string encoding is still UTF-8. But the default external encoding is… IBM437.
What is IBM437? It’s the original PC DOS character encoding, and on Windows, in the absence of any hints about the actual character encoding of a stream or text file, it’s the default.
In case you’re wondering: yes, this is the standard encoding for the standard input stream as well.
$stdin.external_encoding # => #<Encoding:IBM437>
What kind of trouble can this cause? Well, let’s do something incredibly simple. We’ll write a string to a file, then read it back in. What could go wrong?
# encoding: utf-8
str = "Hello, encodings¡"
puts "Original string encoding is: #{str.encoding}"
puts "Writing: #{str} to output.txt"
File.write("output.txt", str)
puts "Reading text back from output.txt"
str = File.read("output.txt")
puts "The string contents is: #{str}"
puts "The string's encoding is: #{str.encoding}"
Here’s the output:
>ruby readback2.rb
Original string encoding is: UTF-8
Writing: Hello, encodings¡ to output.txt
Reading text back from output.txt
The string contents is: Hello, encodings┬í
The string's encoding is: IBM437
That Unicode text isn’t looking so great at the end: "Hello, encodings┬í". That’s because, without any hints to the contrary from us, Ruby assumed we wanted to transcode that internal UTF-8 string into IBM437. And, well, IBM437 doesn’t have an encoding for “inverted exclamation mark”. As a result, something was lost in translation.
Use explicit external encodings
How can we make this work? We just need to be explicit about what encoding we want to use when writing the file to the disk, and reading it back.
File.write("output.txt", str, external_encoding: "utf-8")
# ...
str = File.read("output.txt", external_encoding: "utf-8")
This time around, the roundtrip is successful:
>ruby readback2.rb
Original string encoding is: UTF-8
Writing: Hello, encodings¡ to output.txt
Reading text back from output.txt
The string contents is: Hello, encodings¡
The string's encoding is: UTF-8
The other alternative, in this particular case at least, is to use binary mode to read and write the file.
File.write("output.txt", str, mode: "wb")
# ...
str = File.read("output.txt", mode: "rb")
In binary mode, no transcoding will be performed. This could be bad for interoperability with other programs. But if this program is the only writer or reader of the file, it ensures that exactly what goes out is exactly what comes in again.
What about streams we don’t open ourselves? If we knew we were going to receive UTF-8 data on $stdin, we could use set_encoding to make sure Ruby doesn’t mis-transcode anything:
$stdin.set_encoding("utf-8")
Be encoding-aware
Once again, this isn’t really a “Windows thing”. It’s a “computers in a diverse world thing”. If you care about running your code outside of a few operating systems which happen to default to UTF-8 encodings, you need to be encoding-aware. Especially when reading and writing files, think about what encoding you expect the files to be in, and tell Ruby about it.
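One way to tell Ruby about it is the mode-string syntax, where the encoding rides along after a colon. A small sketch (the file name is arbitrary):

```ruby
# "w:utf-8" and "r:utf-8" pin the external encoding explicitly,
# instead of trusting the platform default (IBM437 on Windows).
File.open("greeting.txt", "w:utf-8") { |f| f.write("¡Hola!") }
text = File.open("greeting.txt", "r:utf-8", &:read)
text.encoding.name # => "UTF-8"
```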
Don’t embed Bash-specific shell commands
This is a simple one, but it’s surprising how many Ruby tools and libraries incorporate command-line invocations into their operation. Because the Windows commandline environment is so different from the UNIX shell, you have to be very careful about what commands you depend on, and what command-line syntax you use.
For instance, in the seeing_is_believing Rakefile, I found this code:
`which bundle`
unless $?.success?
  sh 'gem', 'install', 'bundler'
end
This uses backticks to evaluate the UNIX which command. Unfortunately, this isn’t portable to Windows. Windows does have a similar where command, but it works a little bit differently. In this case, I chose to simply replace the shell command with a pure-Ruby alternative.
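For the curious, here’s a sketch of the kind of pure-Ruby replacement I mean. The helper name is mine, not the one in the seeing_is_believing source; it walks PATH, and consults PATHEXT so that "bundle" matches "bundle.bat" on Windows:

```ruby
# A portable stand-in for the UNIX `which` command.
def find_executable(cmd)
  # On Windows, PATHEXT lists executable extensions (".EXE;.BAT;...").
  # On UNIX-like systems it's unset, so we only try the bare name.
  exts = ENV.fetch("PATHEXT", "").split(";") + [""]
  ENV.fetch("PATH", "").split(File::PATH_SEPARATOR).each do |dir|
    exts.each do |ext|
      candidate = File.join(dir, cmd + ext)
      return candidate if File.executable?(candidate) &&
                          !File.directory?(candidate)
    end
  end
  nil
end
```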
You should also keep an eye out for UNIX-specific special files. For instance, Windows doesn’t have /dev/null; it has NUL instead.
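For this particular difference, Ruby already does the abstraction for you: the File::NULL constant names the platform’s null device, so you never need to hard-code either spelling.

```ruby
# File::NULL is "/dev/null" on UNIX-like systems and "NUL" on Windows.
File.open(File::NULL, "w") { |f| f.puts "discarded" }
```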
Use pure Ruby
You may be surprised at just how easy it can be to replace UNIX shell commands with pure-Ruby alternatives. For instance, Ruby’s FileUtils module exists to provide a cross-platform set of standard file manipulation commands. Even more “advanced” operations like creating file hard links are supported for both UNIX-like and Windows hosts.
Need to do fancy shell tricks like multi-program pipelines? You can set these commands up without an actual shell, by using the methods in the Open3 library.
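For example, here’s a shell-free two-stage pipeline sketched with Open3 (the input lines are made up). Each command is its own argument array, so no shell quoting rules ever come into play:

```ruby
require "open3"

# The equivalent of `... | sort | uniq`, with Open3 wiring the
# pipes between the processes -- no shell involved.
sorted = Open3.pipeline_rw(["sort"], ["uniq"]) do |stdin, stdout, _threads|
  stdin.puts "banana"
  stdin.puts "apple"
  stdin.puts "apple"
  stdin.close
  stdout.read
end
sorted # => "apple\nbanana\n"
```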
No forks for you
Ruby’s Process API is built on the POSIX set of UNIX system calls. As such, it exposes some subprocess features which are peculiar to UNIX-like systems, such as the fork() call.
Now, I know some people might take offense at my calling fork() “peculiar”. So I’ll just quote Dennis Ritchie:
…it is easy to see how some of the slightly unusual features of the [process control] design are present precisely because they represented small, easily-coded changes to what existed. A good example is the separation of the fork and exec functions. The most common model for the creation of new processes involves specifying a program for the process to execute; in Unix, a forked process continues to run the same program as its parent until it performs an explicit exec. The separation of the functions is certainly not unique to Unix, and in fact it was present in the Berkeley time-sharing system, which was well-known to Thompson. Still, it seems reasonable to suppose that it exists in Unix mainly because of the ease with which fork could be implemented without changing much else.
(Emphasis mine)
Windows, by contrast, uses what Ritchie refers to above as “the most common model”. You can use the Windows API to spawn a new process executing a given executable. But you can’t tell it to “fork” the current process into two identical processes.
Fork no
As such, Ruby’s fork() method is not implemented in Windows builds. And anything built on it won’t work.
The good news is, few if any of Ruby’s own APIs require fork in order to work. You can still use backticks and system() and the various Process module methods to your heart’s content.
Just don’t use fork(). And, really, unless you are specifically setting out to write a process-forking web server, you shouldn’t need to build directly on fork() anyway. Ruby’s Process and Open3 modules provide all of the higher-level process-spawning variations you are likely to need, while abstracting away low-level OS-specific details such as “fork-and-exec”.
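A minimal sketch of the fork-free pattern: Process.spawn launches a program directly, which maps cleanly onto the Windows process-creation model.

```ruby
require "rbconfig"

# Spawn a child running the current Ruby interpreter -- no fork needed.
pid = Process.spawn(RbConfig.ruby, "-e", "exit 7")

# Wait for the child and collect its exit status.
_pid, status = Process.wait2(pid)
status.exitstatus # => 7
```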
Beware of platform-specific options
When you read the documentation for Ruby core libraries such as Process, you may come across the words: “Not available on all platforms”. This means (drumroll)… that it’s not available on all platforms. Be careful about using methods and method options that are marked with this proviso.
For instance: Ruby’s Process.spawn method can take a lot of different options. Some of them are universal. Some of them are platform-specific.
As we’ve already seen, the Windows process API is very, very different from the UNIX one. And sometimes it’s not possible to achieve full feature parity across the two. Rather than restricting you to a common subset of functionality, Ruby opts to expose platform-specific features where they are available, and raise NotImplementedError where they are not.
Redirection woes
As a concrete example, this code starts an external program, and redirects its standard output and standard error streams to a set of pipes.
stdout_r, stdout_w = IO.pipe
stderr_r, stderr_w = IO.pipe
Process.spawn("myprogram", out: stdout_w, err: stderr_w)
This code works just fine on Windows—pipes and all.
But this code fails:
stdout_r, stdout_w = IO.pipe
stderr_r, stderr_w = IO.pipe
special_r, special_w = IO.pipe
Process.spawn("myprogram", out: stdout_w, err: stderr_w, 4 => special_w)
It’s trying to redirect an “extra” file descriptor, #4, into a special pipe. But this feature isn’t available in Ruby’s Windows Process.spawn implementation.
There are probably a few different viable options for tackling a missing advanced process-communication feature like that one. When I encountered this incompatibility in the seeing_is_believing codebase, I opted to simply replace the extra channel with a universally-supported TCP socket.
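Here’s a hedged sketch of that idea (not the actual seeing_is_believing code): the parent listens on an OS-assigned loopback port, passes the port number to the child through an environment variable, and the child connects back instead of inheriting an extra file descriptor.

```ruby
require "socket"
require "rbconfig"

# Parent: listen on an ephemeral loopback port.
server = TCPServer.new("127.0.0.1", 0)
port = server.addr[1]

# Child: a tiny Ruby one-liner that reads the port from ENV and
# connects back, replacing the fd-4 redirection.
pid = Process.spawn(
  { "RESULT_PORT" => port.to_s },
  RbConfig.ruby, "-rsocket",
  "-e", 'TCPSocket.open("127.0.0.1", ENV["RESULT_PORT"].to_i) { |s| s.puts "done" }'
)

conn = server.accept
message = conn.gets # => "done\n"
conn.close
Process.wait(pid)
```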
Use gems if necessary
This wasn’t the only process-related incompatibility I ran into while updating the seeing_is_believing code. I also had to deal with some differences in how UNIX and Windows implement the concept of “process groups”.
In the end, I opted to stand on the shoulders of giants and use a gem that abstracts away some of the differences between the platforms. This is often a possibility when dealing with UNIX/Windows compatibility headaches. In particular, familiarize yourself with Daniel Berger’s excellent gems. He’s been working on the problem of making Windows APIs more accessible from Ruby for many years.
Conclusion
This has been a far-from-comprehensive whirlwind tour of some of the areas where you might run into problems making your Ruby code run on Windows hosts. I’m sure there are a lot of other tips we could go over, but the tips here should at least give you a heads-up about the kinds of problems to expect, and how to go about finding solutions.
I know it can be frustrating to have a program which works perfectly, only to have someone complain that it “doesn’t work on my PC”. But if you have access to a Windows machine, I encourage you to take the plunge and make your code truly platform-agnostic. You’ll be a better programmer for it, and you’ll be making the Ruby world a friendlier place to newbies.
I think you have a typo here:
stderr_r, stderr_r = IO.pipe
should be
stderr_r, stderr_w = IO.pipe
Also, here:
outside of a few operating systes which happen
should be ‘systems’
HTH
Thank you for writing this. I come from the world of having never written any code for Windows (or DOS), but now I’m teaching people who do use Windows machines. It has been frustrating to not be able to answer their questions, and to give them the same experience of developing in Ruby and in Rails without requiring them to jump through hoops. Let’s hope more of us nixers can write more universal Ruby.
I read an article some time ago that said most gems were not supported on Windows.
The developers of the gems take the time to ensure they work in UNIX environments, but they have their own lives to live and do not try to get the code working in other environments. Since most applications have dependencies on gems, why focus on Ruby on Windows?
Well, first off, most gems work just fine under Windows. They may not be “supported” under Windows, in the sense that their creator doesn’t have a Windows machine to test them on. But that doesn’t mean they don’t work.
And as to why: almost ninety percent of the humans using desktop computers are using Windows. That’s an awful lot of people to discourage from learning Ruby by saying “you have to change operating systems first”.
That might have been my blog post (shameless self-plug: https://www.claudiuscoenen.de/2015/01/ruby-on-windows/ )
Avdi is right: Ruby itself is not the problem here. Also: _most_ gems work just fine. But every once in a while something wouldn’t work. And sometimes it only took raising an issue with the author, sometimes it took a tiny contribution. Sometimes, though, bugs remain open for years (not exaggerating!). Sometimes maintainers aren’t even open to another CI which would prevent breaking changes in the future.
After writing that blog post, people attacked me with nonsense like “well then fix it” – as if I hadn’t tried. People mocked me with “_real_ devs don’t use windows”. Well, this one here does. I’m not going to start a pissing match who is the greater dev, based on operating system. I am certainly not switching operating systems just for one part in my tool chain.
I found all of this so tiresome that I almost completely left ruby. I still have a few client projects where I work as a ruby programmer. All my private projects are Node.js now. It’s not that I like JavaScript more (I _love_ ruby as a language), but node usually gives me less of a headache.
Thanks Avdi for your post. Actually, I am one of the few silly Ruby programmers living in the Windows universe (the “dark side” of the Force). From my personal experience I can tell that the gems that fail to run properly on Windows are a minority (my guess: less than 10 percent). Which isn’t too bad, but is indeed sometimes frustrating.
By the way, for nixers there is a new CI service for Windows that should integrate seamlessly with GitHub. It’s called AppVeyor. Here’s a post about it: https://mattbrictson.com/how-to-test-ruby-windows
Ah! Thank you for that. I’ve been hearing good things about AppVeyor.
I’ve been an active supporter of Ruby on Windows with gems like rake-compiler-dock and by many contributions to popular gems. Your post covers many of the typical issues, although there are some more unmentioned:
* Forward/backslash path separator: While in most cases Windows accepts both kinds of separators, in some cases only one of them can be used.
* Case insensitivity of filenames: When comparing filenames, applications can not simply compare the strings, but need to know, which characters are handled equally by the OS.
* Escape rules for command line arguments: Most programs accept MSVCRT argument escaping, but Ruby’s shellwords (from stdlib) is UNIX-only. It’s best to always use the array form of IO.popen, system, etc.
I agree that using UNIX specific commands from Ruby isn’t a good style and that not all method calls or all call variations can be supported on all platforms. But I definitely see it as a Microsoft fault, to not have fixed other issues like newline character and character encoding for so long. Why distinguish between text and binary files? Why is the console using such ancient and language dependent encodings? Why are so many developers annoyed by such useless differences?
Windows would be much more attractive to developers outside of the C# / Visual Studio world, if Microsoft could fix their bad historic decisions.
The text/binary dichotomy has nothing to do with Microsoft. It’s embedded in C, and C gets it from UNIX – which was built with the concept of an “internal” newline which would be translated to an “external” newline by device drivers.
It’s just that developing on OS X or Linux makes it really easy to mess up and fail to specify the right mode, because *on those platforms* you can make the mistake with no consequences. Because C and UNIX happen to share the same internal and external newline convention.
If you want to blame someone, blame the creators of C, who decided that the default file mode would be “text” (aka “newline translation mode”) instead of binary (aka “no translation mode”).
…and here’s the comment I should have started out with: thank you very much for that list of other gotchas; you’re right about all of those points.
Lars, I’ve seen a lot of your work over the years, so let me take this opportunity to say “thank you”.
Ah, yes, the fork instruction…
When three and a half decades ago I moved from a four year foray with Zilog, AMD and Intel microcomputers into a PDP-11 only shop, I discovered that there was an entire parallel universe in which DEC machines were the only computers which existed, and “fork” was the only way to create a thread.
On all other computers, a new thread was laboriously created by building a new environment block (let’s hope you build it right) then Executive Requesting the operating system asking it to please begin a new thread using that environment block.
The PDP series from DEC, however had a very cute MACHINE LANGUAGE (!) instruction, “fork”, which cloned the current environment, skipped the next machine code instruction, and began a new thread.
As Dennis Ritchie, who was doing his UNIX development on DEC machines said, “…it was easy to implement…”.
Thank you for this article, Avdi! I was just working on improving Windows compatibility for one of my gems today, and your article (along with the earlier comments) was a wealth of information.