In this post, I'll explain how to securely configure NFS on Debian, to mount a directory from one server on another machine. A good use case for this is if you have a storage VPS with a large amount of storage, and want to use this space from other servers.

Security

NFS is unencrypted by default. It can be encrypted if you use Kerberos, but I wouldn't recommend going through the pain of configuring Kerberos unless you're setting up a corporate network with dozens of users.

Because of this, I would recommend never exposing an NFS server directly to the internet. I'd also advise against exposing it on "internal" networks which are not isolated per customer, such as what HostHatch provides. On isolated private networks (like what BuyVM provides), it's fine to use NFS unencrypted.

To secure NFS connections over the internet or other untrusted network, I'd recommend using WireGuard. There are various guides on how to configure WireGuard (like this one) so I won't go into it in too much detail. Note that WireGuard does not have the concept of a "client" and "server" like classic VPN solutions like OpenVPN. Each node is a "peer", and the overall topology is up to you. For example, you can have a "mesh" VPN network where every machine can directly access every other machine, without a central server.

On Debian 11 (Bullseye, testing) you can simply use apt install wireguard to get WireGuard. On Debian 10 (Buster), you'll have to enable buster-backports then do apt -t buster-backports install wireguard.

Generate a private and public key on each system:

wg genkey | tee privatekey | wg pubkey > publickey

Then configure /etc/wireguard/wg0.conf on each system. The [Interface] section should have the private key for that particular system. The NFS server should have a [Peer] section for each system that is allowed to access the NFS server, and all the other systems should have a [Peer] section for the NFS server. It should look something like this:

[Interface]
Address = 10.123.0.2
PrivateKey = 12345678912345678912345678912345678912345678
ListenPort = 51820

[Peer]
PublicKey = 987654321987654321987654321987654321987654321
AllowedIPs = 10.123.0.1/32
Endpoint = 198.51.100.1:51820

where 10.123.0.1 and 10.123.0.2 can be any IPs of your choosing, as long as they're in the same subnet and in one of the IP ranges reserved for local networks (10.x.x.x is usually a good choice).
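
For reference, the NFS server's own wg0.conf is the mirror image. It would look something like this (the keys below are placeholders; add one [Peer] block per client, and Endpoint can be left out on the server side when the clients are the ones initiating the connection):

[Interface]
Address = 10.123.0.1
PrivateKey = <the NFS server's private key>
ListenPort = 51820

[Peer]
PublicKey = <the client's public key>
AllowedIPs = 10.123.0.2/32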

Enable and start the WireGuard service on each machine:

systemctl enable wg-quick@wg0
systemctl start wg-quick@wg0

Run wg to check that it's running. Make sure you can ping the NFS server from the other servers.
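
For example, from one of the other servers:

ping -c 3 10.123.0.1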

NFS Server

On the NFS server, install the nfs-kernel-server package:

apt install nfs-kernel-server

A best practice these days is to only enable NFSv4 unless you really need NFSv3. To only enable NFSv4, set the following variables in /etc/default/nfs-common:

NEED_STATD="no"
NEED_IDMAPD="yes"

And the following in /etc/default/nfs-kernel-server. Note that RPCNFSDOPTS is not present by default, and needs to be added.

RPCNFSDOPTS="-N 2 -N 3 -H 10.123.0.1"
RPCMOUNTDOPTS="--manage-gids -N 2 -N 3"

10.123.0.1 should be the IP address the NFS server will listen on (the WireGuard IP).

Additionally, rpcbind is not needed by NFSv4 but will be started as a prerequisite by nfs-server.service. This can be prevented by masking rpcbind.service and rpcbind.socket:

systemctl mask rpcbind.service
systemctl mask rpcbind.socket

Next, configure your NFS exports in /etc/exports. For example, this will export the /data/hello-world directory and only allow 10.123.0.2 to access it:

/data/hello-world 10.123.0.2(rw,sync,no_subtree_check)

Refer to the exports(5) man page for more details.
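
exports also accepts CIDR subnets, so if every machine on the WireGuard network should have access, you could export to the whole range instead:

/data/hello-world 10.123.0.0/24(rw,sync,no_subtree_check)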

Finally, start the NFS server:

systemctl start nfs-server

NFS Client

On the NFS client, you need to install the nfs-common package:

apt install nfs-common

Now, you can use the mount command to mount the directory over NFS:

mkdir -p /mnt/data/
mount -t nfs4 -o vers=4.2,async 10.123.0.1:/data/hello-world /mnt/data/

Try writing some files to /mnt/data, and it should work!

To automatically mount the directory on boot, modify /etc/fstab:

10.123.0.1:/data/hello-world /mnt/data nfs4 auto,vers=4.2
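
One thing to watch out for: the mount can only succeed once the WireGuard interface is up. If the automatic mount races with wg-quick at boot, systemd's x-systemd.requires= mount option (see systemd.mount(5)) can express the dependency, for example:

10.123.0.1:/data/hello-world /mnt/data nfs4 auto,vers=4.2,x-systemd.requires=wg-quick@wg0.service 0 0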

Optional: Caching

You can optionally cache data from the NFS server on the local disk by using FS-Cache, a transparent read-through cache. The first time files are read via NFS, they will be cached locally. On subsequent reads, if the file has not been modified since the time it was cached, it will be read from the local cache rather than loading over the network. This can provide a significant performance benefit if the NFS server has slower disks and/or is physically distant from the clients.

To enable caching, first install cachefilesd:

apt install cachefilesd

Turn it on by editing /etc/default/cachefilesd, following the instructions in the file:

# You must uncomment the run=yes line below for cachefilesd to start.
# Before doing so, please read /usr/share/doc/cachefilesd/howto.txt.gz as
# extended user attributes need to be enabled on the cache filesystem.
RUN=yes

Modify your NFS mount in /etc/fstab to add the fsc (file system cache) attribute. For example:

10.123.0.1:/data/hello-world /mnt/data nfs4 auto,vers=4.2,fsc

Finally, start the service and remount your directory:

systemctl start cachefilesd
mount -o remount /mnt/data

To check that it's working, read some files from the mount, and you should see /var/cache/fscache/ growing in size:

du -sh /var/cache/fscache/
76K     /var/cache/fscache/

By default, the cache will keep filling up until the disk only has 7% free space left. Once free space drops below 7%, older cached files will start being culled to free up space. If free space drops below 3%, caching will be turned off entirely. You can change these thresholds by modifying /etc/cachefilesd.conf.
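
For example, to be more conservative with disk usage, you could raise the block-space limits in /etc/cachefilesd.conf to something like this (bcull is the free-space percentage at which culling starts, bstop the point at which caching stops; brun must stay above bcull):

brun  20%
bcull 15%
bstop 10%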

WireGuard is an exciting, new, extremely simple VPN system that uses state-of-the-art cryptography. Its Linux implementation runs in the kernel, which provides a significant performance boost compared to traditional userspace VPN implementations.

The WireGuard kernel module is great, but sometimes you might not be able to install new kernel modules. One example scenario is on a VPS that uses OpenVZ or LXC. For these cases, we can use wireguard-go, a userspace implementation of WireGuard. This is the same implementation used on MacOS, Windows, and the WireGuard mobile apps. This implementation is slower than the kernel module, but still plenty fast.

This post focuses on Debian, however the instructions should mostly work on other Linux distros too.

Install WireGuard Tools

We need to install the WireGuard tools (wg-quick). On Debian, you can run this as root:

echo "deb http://deb.debian.org/debian/ unstable main" > /etc/apt/sources.list.d/unstable.list
printf 'Package: *\nPin: release a=unstable\nPin-Priority: 90\n' > /etc/apt/preferences.d/limit-unstable
apt update
apt install wireguard-tools --no-install-recommends

(see the WireGuard site for instructions if you're not on Debian)

Install Go

Unfortunately, since wireguard-go is not packaged for Debian, we need to compile it ourselves. To compile it, we first need to install the latest version of the Go programming language (currently version 1.13.4):

cd /tmp
wget https://dl.google.com/go/go1.13.4.linux-amd64.tar.gz
tar zvxf go1.13.4.linux-amd64.tar.gz
sudo mv go /opt/go1.13.4
sudo ln -s /opt/go1.13.4/bin/go /usr/local/bin/go

Now, running go version should show the version number.

Compile wireguard-go

Now that we've got Go, we can download and compile wireguard-go. Download the latest release version:

cd /usr/local/src
wget https://git.zx2c4.com/wireguard-go/snapshot/wireguard-go-0.0.20191012.tar.xz
tar xvf wireguard-go-0.0.20191012.tar.xz
cd wireguard-go-0.0.20191012

If you are on a system with limited RAM (such as a 256 MB or lower "LowEndSpirit" VPS), you will need to make a small tweak to the wireguard-go code so that it uses less RAM. Open device/queueconstants_default.go and replace this:

MaxSegmentSize             = (1 << 16) - 1 // largest possible UDP datagram
PreallocatedBuffersPerPool = 0 // Disable and allow for infinite memory growth

With these values (taken from device/queueconstants_ios.go):

MaxSegmentSize             = 1700
PreallocatedBuffersPerPool = 1024

This will make it use a fixed amount of RAM (~20 MB max), rather than allowing memory usage to grow infinitely.

Now we can compile it:

make
# "Install" it
sudo cp wireguard-go /usr/local/bin

Running wireguard-go --version should work and show the version number.

If you have multiple VPSes that use the same OS version and architecture (eg. Debian 10, 64-bit), you can compile it on one of them and then just copy the wireguard-go binary to all the others.
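
For example (hypothetical hostname):

scp /usr/local/bin/wireguard-go other-vps:/usr/local/bin/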

Configuration

wg0.conf

You'll need to configure /etc/wireguard/wg0.conf to contain the configuration for your peer. This post won't go into significant detail about this; please refer to another general WireGuard guide (like this one) for more details. The basic gist is that you need to run:

wg genkey | tee privatekey | wg pubkey > publickey

to generate a public/private key pair for each peer, then configure the [Interface] with the private key for the peer, and a [Peer] section for each peer that can connect to it.

Your wg0.conf should end up looking something like:

[Interface]
Address = 10.123.0.2
PrivateKey = 12345678912345678912345678912345678912345678
ListenPort = 51820

[Peer]
PublicKey = 987654321987654321987654321987654321987654321
AllowedIPs = 10.123.0.1/32
Endpoint = 198.51.100.1:51820

systemd

We need to modify the systemd unit to pass the WG_I_PREFER_BUGGY_USERSPACE_TO_POLISHED_KMOD flag to wireguard-go, to allow it to run on Linux. Open /lib/systemd/system/wg-quick@.service, find:

Environment=WG_ENDPOINT_RESOLUTION_RETRIES=infinity

and add this line directly below:

Environment=WG_I_PREFER_BUGGY_USERSPACE_TO_POLISHED_KMOD=1
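
Note that edits made directly under /lib/systemd/system may be overwritten when the wireguard-tools package is upgraded. An alternative (standard systemd practice, nothing WireGuard-specific) is to put the same line in a drop-in override instead:

systemctl edit wg-quick@wg0

and add the following in the editor that opens:

[Service]
Environment=WG_I_PREFER_BUGGY_USERSPACE_TO_POLISHED_KMOD=1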

Finally, enable and start the systemd service:

systemctl enable wg-quick@wg0
systemctl start wg-quick@wg0

Enabling the systemd service will connect the VPN on boot, and starting the systemd service will connect it right now.

You're Done

Now, everything should be working! You can check the status of wg-quick by running systemctl status wg-quick@wg0, which should return something like:

● wg-quick@wg0.service - WireGuard via wg-quick(8) for wg0
   Loaded: loaded (/lib/systemd/system/wg-quick@.service; enabled; vendor preset: enabled)
   Active: active (exited) since Mon 2019-07-01 06:30:30 UTC; 1 day 22h ago

Running wg will give you a list of all the peers, and some details about them:

interface: wg0
  public key: 987654321987654321987654321987654321987654321
  private key: (hidden)
  listening port: 38917

peer: 987654321987654321987654321987654321987654321
  endpoint: 198.51.100.1:51820
  allowed ips: 10.123.0.1/32
  latest handshake: 1 day, 22 hours, 59 minutes, 34 seconds ago
  transfer: 2.75 KiB received, 2.83 KiB sent

So I recently encountered a strange issue on two of my servers. I noticed that the load average was increasing approximately every 20 minutes:

[Load average graph]

I suspected a cronjob, but I don't have any cronjobs that run every 20 mins. Also, CPU usage doesn't actually increase during that period:

[Low CPU usage graph]

I did some digging and it took a long time to work out what was happening.

So What Is Load Average Anyway?

"Load average" is a term used to describe a measure of how "busy" a system is. Unix-like systems (including Linux) show a load average as three numbers, representing the system load over the previous one minute, five minutes, and fifteen minutes. These numbers represent the number of processes that are using the CPU right now, waiting to use the CPU, or waiting for disk I/O. The Wikipedia article has more details.

Linux updates the load average every 5 seconds. In fact, it actually updates every 5 seconds plus one "tick". The reason for this is to avoid coinciding with other five-second timers:

It turns out that there are a few other five-second timers in the kernel, and if the timers get in sync, the load-average can get artificially inflated by events that just happen to coincide. So just offset the load average calculation it by a timer tick.

From the Linux kernel code:

sched/loadavg.h:

#define LOAD_FREQ	(5*HZ+1) /* 5 sec intervals */

sched/loadavg.c:

 * The global load average is an exponentially decaying average of nr_running +
 * nr_uninterruptible.
 *
 * Once every LOAD_FREQ:
 *
 *   nr_active = 0;
 *   for_each_possible_cpu(cpu)
 *	nr_active += cpu_of(cpu)->nr_running + cpu_of(cpu)->nr_uninterruptible;
 *
 *   avenrun[n] = avenrun[0] * exp_n + nr_active * (1 - exp_n)

HZ is the kernel timer frequency, which is defined when compiling the kernel. On my system, it's 250:

% grep "CONFIG_HZ=" /boot/config-$(uname -r)
CONFIG_HZ=250

This means that every 5.004 seconds (5 + 1/250), Linux calculates the load average. It checks how many processes are actively running plus how many processes are in uninterruptible wait (eg. waiting for disk IO) states, and uses that to compute the load average, smoothing it exponentially over time.

Say you have a process that starts a bunch of subprocesses every second. For example, Netdata collecting data from some apps. Normally, these subprocesses are very fast and won't overlap with the load average check, so everything is fine. However, every 1251 seconds (5.004 * 250), the load average update coincides with an exact one-second boundary, since 1251 is the smallest number that is a multiple of both 5.004 and 1. 1251 seconds is 20.85 minutes, which is almost exactly the interval at which I was seeing the load average increase. My educated guess is that every 20.85 minutes, Linux checks the load average at the exact moment that several of these subprocesses have just been started and are in the queue to run.
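
To double-check the arithmetic (HZ=250, as above):

% echo 'scale=3; 5 + 1/250' | bc
5.004
% echo 'scale=3; (5 + 1/250) * 250' | bc
1251.000
% echo 'scale=2; 1251 / 60' | bc
20.85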

I confirmed this by disabling netdata and manually watching the load average:

while true; do uptime; sleep 5; done

After 1.5 hours, I did not see any similar spikes. The spikes only occur when Netdata is running.

It turns out other people have hit similar issues in the past, albeit with different intervals, and their write-ups were extremely helpful in tracking this down.

In the end, I'm not sure if I'd call this a bug, but perhaps netdata could implement some jitter so that it doesn't perform checks every one second exactly. I posted a GitHub issue so their developers can take a look.

I use a Google Docs spreadsheet to manage all my domains. It contains a list of all the domain names I own, along with their expiry dates, the name of the registrar the domain is registered with, and some other details. I also wanted to add a column showing the nameservers, so I could tell which domains were parked vs which domains I'm actively using.

Google Apps Script provides a UrlFetchApp.fetch function to perform network requests. We can combine this with Google's DNS-over-HTTPS API to load DNS records for a given domain:

function GetDNSEntries(domain, type) {
  var response = UrlFetchApp.fetch('https://dns.google.com/resolve?name=' + domain + '&type=' + type);
  var data = JSON.parse(response.getContentText());

  var results = data.Answer.map(function(answer) {
    // Remove trailing dot from answer
    return answer.data.replace(/\.$/, '');
  });
  return results.sort().join(', ');
}

We can then use this function in a spreadsheet:

=GetDNSEntries(A1, "NS")

This results in a column listing the DNS servers for each domain, with data that's always kept up-to-date by Google Docs.

Recently I was upgrading one of my projects from Visual Studio 2015 to Visual Studio 2017 (including converting from project.json and .xproj to .csproj), when I hit an error like this:

Microsoft.Common.CurrentVersion.targets(2867,5): error MSB3552: Resource file "**/*.resx" cannot be found.

It turns out this is caused by a long-standing MSBuild bug: Wildcard expansion is silently disabled when a wildcard includes a file over MAX_PATH. The Microsoft.NET.Sdk.DefaultItems.props file bundled with .NET Core includes a section that looks like this:

<EmbeddedResource 
  Include="**/*.resx" 
  Exclude="$(DefaultItemExcludes);$(DefaultExcludesInProjectFolder)"
  Condition=" '$(EnableDefaultEmbeddedResourceItems)' == 'true' "
/>

When MSBuild tries to expand the **/*.resx wildcard, it hits this bug, resulting in the wildcard not being expanded properly. Some other MSBuild task interprets the **/*.resx as a literal file name, and crashes and burns as a result.

In my case, my build server was running an old version of npm, which is known to create extremely long file paths. The way to "fix" this is by reducing the nesting of your folders. If you're using npm, upgrading to a newer version (or switching to Yarn) should fix the issue. Otherwise, you may need to move your project to a different directory, such as a directory in the root of C:\.
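
Another possible workaround (a sketch; I didn't need it in the end): since the default glob is guarded by the EnableDefaultEmbeddedResourceItems condition shown above, you can turn it off in your .csproj and list your resource files explicitly, which avoids wildcard expansion entirely. The path below is hypothetical:

<PropertyGroup>
  <EnableDefaultEmbeddedResourceItems>false</EnableDefaultEmbeddedResourceItems>
</PropertyGroup>
<ItemGroup>
  <!-- List your actual .resx files here -->
  <EmbeddedResource Include="Resources\Strings.resx" />
</ItemGroup>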

Job DSL is an excellent plugin for Jenkins, allowing you to configure your Jenkins jobs through code rather than through the Jenkins UI. This allows you to more easily track changes to your Jenkins jobs, and revert to old versions in case of any issues. As an example, for the Yarn project, we have a Jenkins job to publish a Chocolatey package whenever a new stable Yarn version is out. The configuration for a Jenkins job to do that might look something like this:

job('yarn-chocolatey') {
  displayName 'Yarn Chocolatey'
  description 'Publishes a Chocolatey package whenever Yarn is updated'
  label 'windows'
  scm {
    github 'yarnpkg/yarn', 'master'
  }
  triggers {
    urlTrigger {
      cron 'H/15 * * * *'
      url('https://yarnpkg.com/latest-version') {
        inspection 'change'
      }
    }
  }
  steps {
    powerShell '.\\scripts\\build-chocolatey.ps1 -Publish'
  }
  publishers {
    gitHubIssueNotifier {}
  }
}

This works well, but what if we want to use the exact same trigger for another project? Sure, we could copy and paste it, but that becomes unmaintainable pretty quickly. Instead, we can take advantage of the fact that Job DSL configuration files are Groovy scripts, and simply pull the shared configuration out into its own separate function:

def yarnStableVersionChange = {
  triggerContext -> triggerContext.with {
    urlTrigger {
      cron 'H/15 * * * *'
      url('https://yarnpkg.com/latest-version') {
        inspection 'change'
      }
    }
  }
}

Now we can call that function within the job definition, passing the delegate of the closure:

job('yarn-chocolatey') {
  ...
  triggers {
    yarnStableVersionChange delegate
  }
  ...
}

Now whenever we want to create a new job using the same trigger, we can simply reuse the yarnStableVersionChange function!
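
For example, a hypothetical second job (the job name and script path here are made up) could reuse it like so:

job('yarn-homebrew') {
  displayName 'Yarn Homebrew'
  description 'Publishes a Homebrew formula whenever Yarn is updated'
  scm {
    github 'yarnpkg/yarn', 'master'
  }
  triggers {
    yarnStableVersionChange delegate
  }
  steps {
    shell './scripts/build-homebrew.sh --publish'
  }
}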

Recently I moved all my sites onto a new server. I use Duplicity and Backupninja to perform weekly backups of my server. While configuring backups on the new server, I kept encountering a strange error:

Error: gpg: using "D5673F3E" as default secret key for signing
Error: gpg: signing failed: Inappropriate ioctl for device
Error: gpg: [stdin]: sign+encrypt failed: Inappropriate ioctl for device

It turns out this error is due to changes in GnuPG 2.1, which only recently landed in Debian Testing. The error occurs because GnuPG 2.1 by default ignores passphrases passed in via environment variables or stdin, and is trying to show a pinentry prompt. "Inappropriate ioctl for device" is thrown because the Backupninja script is not running through a TTY, so there's no way to actually render the prompt.

To solve the problem, you need to enable loopback pinentry mode. Add this to ~/.gnupg/gpg.conf:

use-agent
pinentry-mode loopback

And add this to ~/.gnupg/gpg-agent.conf, creating the file if it doesn't already exist:

allow-loopback-pinentry

Then restart the agent with echo RELOADAGENT | gpg-connect-agent and you should be good to go!
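
To check that non-interactive signing now works, you can run a quick test outside of Backupninja, something like the following (adjust the key ID to yours, and point --passphrase-file at a file containing your passphrase):

echo test | gpg --batch --pinentry-mode loopback --passphrase-file /path/to/passphrase -u D5673F3E -o /dev/null --sign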

Visual Studio 2015 was recently released, and with it came a newer beta of ASP.NET 5 (formerly referred to as "ASP.NET vNext"). ASP.NET 5 is a complete rewrite of ASP.NET, focusing on being lightweight, composable, and cross-platform. It also includes an alpha version of Entity Framework 7. However, EF7 is not yet production-ready and does not support all features of EF6. One feature that is currently missing from EF7 is support for other database providers - only SQL Server and SQLite are supported at this time.

I wanted to transition a site over to ASP.NET 5, but needed to continue using MySQL as a data source. This meant getting Entity Framework 6 running on ASP.NET 5, which is pretty much undocumented right now. All the documentation and tutorials for EF6 rely heavily on configuration in Web.config, which no longer exists in ASP.NET 5. In this post I'll discuss the steps I needed to take to get it running. An example project containing all the code in this post can be found at https://github.com/Daniel15/EFExample.

Since EF6 does not support .NET Core, we need to remove .NET Core support (delete "dnxcore50": { } from project.json). Once that's done, install the EntityFramework and MySql.Data.Entity packages, and add references to System.Data and System.Configuration. For this post, I'll be using this basic model and DbContext, and assume you've already created your database in MySQL:

public class MyContext : DbContext
{
	public virtual DbSet<Post> Posts { get; set; }
}

public class Post
{
	public int Id { get; set; }
	public string Title { get; set; }
	public string Content { get; set; }
}

Entity Framework 6 relies on the provider and connection string being configured in Web.config. Since Web.config is no longer used with ASP.NET 5, we need to use code-based configuration to configure it instead. To do so, create a new class that inherits from DbConfiguration:

public class MyDbConfiguration : DbConfiguration
{
	public MyDbConfiguration()
	{
		// Attempt to register ADO.NET provider
		try {
			var dataSet = (DataSet)ConfigurationManager.GetSection("system.data");
			dataSet.Tables[0].Rows.Add(
				"MySQL Data Provider",
				".Net Framework Data Provider for MySQL",
				"MySql.Data.MySqlClient",
				typeof(MySqlClientFactory).AssemblyQualifiedName
			);
		}
		catch (ConstraintException)
		{
			// MySQL provider is already installed, just ignore the exception
		}

		// Register Entity Framework provider
		SetProviderServices("MySql.Data.MySqlClient", new MySqlProviderServices());
		SetDefaultConnectionFactory(new MySqlConnectionFactory());
	}
}

The first part of the configuration is a hack to register the ADO.NET provider at runtime, by dynamically adding a new configuration entry to the system.data section. The second part registers the Entity Framework provider. We also need to modify the configuration file to include the connection string. You can use any configuration provider supported by ASP.NET 5; I'm using config.json here because it's the default provider.

{
  "Data": {
    "DefaultConnection": {
      "ConnectionString": "Server=localhost; Database=test; Uid=vmdev; Pwd=password;"
    }
  }
}

Now that we have the configuration, we need to modify the context to use it:

[DbConfigurationType(typeof(MyDbConfiguration))]
public class MyContext : DbContext
{
	public MyContext(IConfiguration config)
		: base(config.Get("Data:DefaultConnection:ConnectionString"))
	{
	}
	// ...
}

An instance of IConfiguration will be automatically passed in by ASP.NET 5's dependency injection system. The final step is to register MyContext in the dependency injection container, which is done in your Startup.cs file:

public void ConfigureServices(IServiceCollection services)
{
	// ...
	services.AddScoped<MyContext>();
}

AddScoped specifies that one context should be created per request, and the context will automatically be disposed once the request ends. Now that all the configuration is complete, we can use MyContext like we normally would:

public class HomeController : Controller
{
    private readonly MyContext _context;

    public HomeController(MyContext context)
    {
        _context = context;
    }

    public IActionResult Index()
    {
        return View(_context.Posts);
    }
}

Hope you find this useful!

Until next time,
— Daniel

In this post I'll cover the basics of using XHP along with the Laravel PHP framework, but most of the information is framework-agnostic and applies to other frameworks too.

What is XHP and Why Should I Use It?

XHP is a templating syntax originally developed by Facebook and currently in use for all their server-rendered frontend code. It adds an XML-like syntax into PHP itself. XHP comes bundled with HHVM, and is available as an extension for regular PHP 5 too.

The main advantages of XHP include:

  • Not just simple language transformations — Every element in XHP is a regular PHP class. This means you have the full power of PHP in your templates, including inheritance. More advanced XHP components can have methods that alter their behaviour.
  • Typed parameters — You can specify that attributes need to be of a particular type and whether they are mandatory or optional. Most PHP templating languages are weakly-typed.
  • Safe by default — All variables are HTML escaped by default.

Installation

From here on, I'm assuming that you already have a basic Laravel app up and running on your machine. If not, please follow the Composer and Laravel quickstart guides before continuing.

If you are running PHP, you will first need to install the XHP extension. HHVM comes bundled with XHP so you don't need to worry about the extension if you're using HHVM.

This extension is only one part of XHP and only implements the parsing logic. The actual runtime behaviour of XHP elements is controlled by a few PHP files. These files implement the base XHP classes that your custom tags will extend, in addition to all the basic HTML tags. This means that you can totally customise the way XHP works on a project-by-project basis (although I'd strongly suggest sticking to the default behaviour so you don't introduce incompatibilities). You can install these files via Composer. Edit your composer.json file and add this to the "require" section:

"facebook/xhp": "dev-master"

While in composer.json, also add "app/views" to the autoload classmap section. This will tell Composer to handle autoloading your custom XHP classes. XHP elements are compiled down to regular PHP classes, and Composer's autoloader can handle loading them. In the end, your composer.json should look something like this. If you do not want to use the Composer autoloader (or it does not work for some reason), you can use a simple custom autoloader instead. I'd only suggest this if you have problems with Composer's autoloader.
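
As a rough sketch, the relevant parts would be something like this (only the bits related to XHP are shown; the Laravel framework version is just an example, so keep whatever your project already requires):

{
    "require": {
        "laravel/framework": "4.2.*",
        "facebook/xhp": "dev-master"
    },
    "autoload": {
        "classmap": [
            "app/commands",
            "app/controllers",
            "app/models",
            "app/views"
        ]
    }
}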

Create Some Views

The first view file we'll create is the basic page layout. Save this as views/layout/base.php:

<?php
class :layout:base extends :x:element {
  attribute
    string title @required;

  public function render() {
    return
      <x:doctype>
        <html>
          <head>
            <title>{$this->getAttribute('title')}</title>
          </head>
          <body>
            {$this->getChildren()}
          </body>
        </html>
      </x:doctype>;
  }
}

(side note: if you are using HHVM, you can replace <?php with <?hh to use Hack instead of vanilla PHP)

This code introduces some core XHP concepts:

  • All XHP classes start with a colon (:), and colons are used to denote "namespaces" (note that these are not PHP namespaces). XHP classes can have multiple colons in the name (so :page:blog:comments is a valid class name)
  • :x:element is the base XHP class that all of your XHP templates should extend.
  • XHP classes can have attributes. This class has a title attribute that's required. If a required attribute is not specified, an exception will be thrown at runtime. Attributes can use intrinsic types (string, int, bool) as well as complex types (class names, eg. for view models or database models)
  • XHP classes have a render method that returns the XHP for rendering this component. This can be a mix of regular HTML tags (as shown here) and other XHP components.

Now that we have a layout file, let's also create a simple page that utilises it. Save this as views/page/home.php:

<?php
class :page:home extends :x:element {
  attribute
    string name @required;

  protected function render() {
    return
      <layout:base title="Hello Title">
        Hello {$this->getAttribute('name')}!
        <strong>This is a test</strong>
      </layout:base>;
  }
}

Notice that this component uses :layout:base in its render method, passing "Hello Title" as the title attribute. Generally, you should favour composition over inheritance (that is, use other components in your render method rather than extending them).

Since we are using Composer's autoloader to load the views, you need to tell it to rebuild its autoloader cache:

composer dump-autoload

This needs to be done every time you add a new view. If you are only editing an existing view, you do not need to do it.

Now that we have a page, let's use it. Using an XHP view from a Laravel route or controller simply involves returning it like you would any other response. In app/routes.php, modify the / route as follows:

Route::get('/', function() {
  return <page:home name="Daniel" />;
});

Save the file and hit your app in your favourite browser. If everything was successful, you should see "Hello Daniel! This is a test" on the screen. Congratulations! You've just created a simple XHP-powered Laravel site!

Next Steps

So where do you go from here? In general, every reusable component should have its own XHP class. For example, if you were using Bootstrap for your site, each Bootstrap component that you'd like to use belongs in its own XHP class. I'd suggest using at least three separate XHP namespaces:

  • :layout — Layout pages, the actual header and footer of the site. Different sections of your site may have different header/footers.
  • :page — Actual website pages
  • :ui — Reusable UI components

Within each of these namespaces, you can have "sub namespaces". For example, you may have :page:blog:... for blog pages.
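
As a small sketch of what a :ui component might look like (a hypothetical Bootstrap-style alert, following the same pattern as the views above):

<?php
class :ui:alert extends :x:element {
  attribute
    string type = 'info';

  protected function render() {
    return
      <div class={'alert alert-' . $this->getAttribute('type')}>
        {$this->getChildren()}
      </div>;
  }
}

It could then be used from any page or layout as <ui:alert type="warning">Something needs your attention</ui:alert>.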

The Secret app for Android came out recently, and I was intrigued, so I thought I'd give it a go.

The best way to describe it would be if PostSecret were to create a social network. It consists of short-form posts in a format similar to Twitter, but eschews the traditional social networking concept of a profile and persona in favour of fully anonymous posts. You have a friend list and can see the number of friends (I've got 10 friends on it) but cannot tell who the friends actually are. You can't even tell if two posts are by the same person, since posts are not associated with a persona. It's such a simple idea that I'm surprised I haven't seen a major implementation of it before.

I feel it detracts from some of the social networking experience. Connecting to people emotionally at a personal level is a core concept of social networking and it's intentionally lost when everything is totally anonymous. A lot of the posts seem to be people complaining about their lives or asking for advice in scenarios that would probably benefit from a more personal mode of communication. I think communication is more than just words; it just feels a lot more natural and easier to connect to someone when they have an identity (even if it is pseudo-anonymous a la LiveJournal and similar sites that don't require real names). Communicating with an abstract entity has a more shallow feeling to it and there's less of a sense of connectedness.

Then again, that may be what some people like. Separating who you are from what you say is an interesting concept in that people are probably more likely to be open if fully anonymous as there's no fear of being judged by people they know. Communication becomes a more abstract concept where there's some sense of belonging without a sense of connection at a personal level. That and you can say things that wouldn't be publicly acceptable.

Have you tried it? What do you think of it?
