This post is a bit of a tangent from everything else about .NET Core.
For the last year, I've been running Linux on my development PC, mainly because I've been hacking around with Docker and my PC isn't really up to running loads of containers in a VirtualBox VM using Docker Toolbox. When you start using Linux, you might go for an Ubuntu-based distro like Mint or Elementary, but you quickly get frustrated by the age of the Ubuntu 14.04 LTS underneath. The cool kids all use Arch, but that's non-trivial to install, so I've ended up using Manjaro, which is based on Arch but is easier to install and a little more cautious with updates, so it's more stable.
What's the point of all that? Well, the teams at Microsoft working on the Linux story for .NET Core are all using Ubuntu 14.04 or some derivative thereof. So everything works on Ubuntu, but won't necessarily be available, or even build, on other variants of Linux. I've tried building coreclr and corert on my Manjaro PC, and it doesn't work yet. It will eventually, hopefully through contributions from the community, at which point I expect to be able to `yaourt dotnet` and have it install from the AUR (Arch User Repository).
In the meantime, it's Docker to the rescue, again.
See, with Docker, I can create an image based on Ubuntu 14.04, install the `dotnet-nightly` package from Microsoft's APT repository, and run `dotnet` inside that container against files and directories on my host disk.
I spent a couple of hours last night fiddling with this and getting it working, so I want to share the results with anyone who might be in the same situation as me.
Here's the Dockerfile:
```dockerfile
FROM ubuntu:trusty

# Install dotnet-nightly
RUN sh -c 'echo "deb [arch=amd64] http://apt-mo.trafficmanager.net/repos/dotnet/ trusty main" > /etc/apt/sources.list.d/dotnetdev.list' \
    && apt-key adv --keyserver apt-mo.trafficmanager.net --recv-keys 417A0893 \
    && apt-get update \
    && apt-get install -y dotnet-nightly \
    && apt-get clean && rm -rf /var/lib/apt/lists/*

COPY dotnetproxy /usr/local/bin/
ENTRYPOINT ["dotnetproxy"]
```
The `dotnetproxy` script is there because when you run commands inside the Docker container, they run as `root`, so all the files and directories that get created on the host are owned by `root` and, for some reason, have `rwx------` permissions. All the script does is `chown` any files or directories that have been created to the main user on the host machine (assuming the main user's UID is 1000, which it usually is). The script runs in the container; it's not meant to be run on your actual PC.
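The post doesn't inline the script itself (the real one is baked into the image at `/usr/local/bin`), but from the description above it's roughly this. It's written here as a shell function for illustration; the UID/GID of 1000 is an assumption, as noted:

```shell
#!/bin/sh
# Sketch of dotnetproxy -- runs INSIDE the container, not on the host.
dotnetproxy() {
  # Forward whatever command line was given straight to the real dotnet CLI.
  dotnet "$@"
  EXIT_CODE=$?

  # Commands in the container run as root, so anything dotnet just wrote
  # into the mounted host directory is owned by root (and, oddly, has
  # rwx------ permissions). Hand ownership back to the host's main user;
  # UID 1000 is an assumption -- the usual first-user UID on Linux.
  chown -R 1000:1000 .

  return $EXIT_CODE
}
```

In the image this is a standalone executable script, so the `ENTRYPOINT ["dotnetproxy"]` line in the Dockerfile makes every `docker run` against the image go through it.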
The last part is the `docknet` script, which is designed to work just like the `dotnet` CLI command. You can run `docknet compile --native`, or any other command you can pass to `dotnet` (I haven't tested all of them). For me, at least, the binary created by `docknet compile --native` runs just fine on my Manjaro PC. I created the script because the Docker command to run `dotnet` inside the container is really, really long, so I wrapped it in a little shell script which you can copy somewhere on your path (if you copy and paste the text yourself, make sure you `chmod +x docknet`).
What the script does is mount the `.local/share/NuGet` folders from your home directory into root's home directory in the container, so when you run `docknet restore` the downloaded packages end up on the host. It also mounts your current directory, using the last segment of the path as the volume name in the container. So if you're working in `/home/mark/stuff/bibble`, the volume will be mounted as `/bibble` in the container, and the working directory set to that.
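The real script lives in the GitHub repo, but based on that description, a minimal sketch might look like this (written as a function for illustration; the exact `docker run` flags and mount paths are assumptions inferred from the post):

```shell
#!/bin/sh
# Sketch of docknet -- a wrapper around the long docker run invocation.
docknet() {
  # Use the last segment of the current path as the mount point, so the
  # compiler names its output after the real directory (bibble.dll, not app.dll).
  DIR_NAME=$(basename "$PWD")

  docker run --rm -it \
    -v "$HOME/.local/share/NuGet:/root/.local/share/NuGet" \
    -v "$PWD:/$DIR_NAME" \
    -w "/$DIR_NAME" \
    rendlelabs/docknet "$@"
}
```

Because the image's entrypoint is `dotnetproxy`, whatever arguments you give `docknet` are forwarded on to `dotnet` inside the container, and any output files get chowned back to you afterwards.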
Why bother with this naming kerfuffle? Well, when you run `dotnet build` or `dotnet compile`, it uses the directory name as the name of the output files. My first working test always mounted your current directory as `/app`, which meant the output files were all called `app.dll` and so on.
Assuming you've got Docker, you just need to `docker pull rendlelabs/docknet` to get the image, and grab the `docknet` script from the GitHub repo at rendlelabs/docknet-docker.
There's an automated build set up on the Docker Hub, triggered at 3am every day by a Zapier scheduled task and webhook, so at the start of every day the image will have the latest nightly build installed.