Leading typename, “dot template,” and why they are necessary.

Anyone dealing with templates will eventually run into something like this

// templated class with a type alias for "type"
template <typename T>
struct Obj {
    using type = int;
};

template <typename T>
void f() {
    Obj<T>::type var; // should be "int var;"
}

But wait, there’s an error on the only statement in f():

# clang-3.6
error: expected ';' after expression
    Obj<T>::type var;
       ^
       ;

# gcc-5.1
error: need ‘typename’ before ‘Obj<T>::type’ because ‘Obj<T>’ is a dependent scope
     Obj<T>::type var; // should be "int var;"

gcc’s error is great in this case, telling us we need to add a leading typename to make it:

typename Obj<T>::type var;

The leading typename tells the compiler that Obj<T>::type is a type. But why do we need this? We know that Obj<T> is a dependent type because it depends on T.

First, you have to realize that without a leading typename, the compiler assumes (in the case of a dependent name) that it names a value, such as a static member variable. That means the following is parsed correctly:

template <typename T>
struct Obj2 {
    static constexpr int type = 5;
};

template <typename T>
void f2() {
    int n = Obj2<T>::type * 3;
}

So what? Why can’t the compiler just look inside of Obj and see that it has a type member? Well, in the case of template specializations, it can’t. Consider this:

// general case has "using type = int"
template <typename T>
struct Obj {
    using type = int;
};

// specialization for int has "int type = 2"
template <>
struct Obj<int> {
    static constexpr int type = 2;
};

Inside of f() we don’t know which one of these we’re dealing with. f<double>() means it’s using the general template, but f<int>() uses the specialization. The programmer must clarify that they expect a type.

You might be thinking “right, but in this case it’s clear that the body of f() is declaring a variable.” Right you are, but it’s not always this clear. What if instead we had

int x = 5; // global variable x
template <typename T>
void f() {
    Obj<T>::type * x;
}

Now we have two possible versions of this

int * x; // declaring a local variable x of type int*
2 * x; // multiplication expression

Both of these lines make sense, but they mean very different things. It’s not always clear which way it should be parsed.

“Dot Template”

You’re less likely to see this problem, but I think it’s cool and has the same idea behind it. When your dependent name has a templated member function, the compiler gets confused about what you mean again:

template <typename T>
struct Obj {
    template <typename U>
    void m();
};

Now let’s create a variable with dependent type Obj<T> and call m()

template <typename T>
void f() {
    Obj<T> v;
    v.m<int>();
}

But hold on, errors! This time clang is the more helpful one:

# gcc-5.1
error: expected primary-expression before ‘int’
     v.m<int>();

# clang-3.6
error: use 'template' keyword to treat 'm' as a dependent template name
    v.m<int>();
      ^
      template 

Huh? The template keyword? What’s happening is that the compiler doesn’t know m is a templated member function; we need to tell it:

v.template m<int>();

Alright, so what’s the alternative parse here? Well, it’s a pretty well-known issue with the C++ grammar that, without context, you don’t know what certain lines mean. Borrowing from the D docs, consider the line:

A<B,C>D;

This could mean two things. (1) A is a templated class, B and C are template arguments, and D is a variable declared with that type. For example:

template <typename T, typename U> struct A {};
using B = int;
using C = int;

A<B,C> D; // A<int,int> D

Or (2), A, B, C, and D are all variables and this is two comparisons separated by a comma

int A{}, B{}, C{}, D{};
(A < B), (C > D);

So, by default the compiler assumes that m is not a template, which means it assumes the angle brackets are the less-than and greater-than operators, so the call is parsed as

((v.m) < int) > ();

This could make sense in a different context

template <typename T>
struct Obj {
    int m;
};

And then the following:

int a{};
v.m<a>0;
((v.m < a) > 0); // how the line above is parsed

So the “dot template” is needed to say that what follows is a template argument list. Like I said, this is far less common, and the syntax is so awkward that it can drive design decisions. If you’ve ever wondered why std::get is a free function, a big part of the reason is that std::tuple is often used as a dependent type.

template <typename... Ts>
void f(std::tuple<Ts...>& t) {
    std::get<0>(t); // nice
    t.template get<0>(); // gross
}

Canon Pixma MG5420 on Linux Mint

Another round with this new platform. My mom uses a desktop computer with a Canon Pixma printer/scanner, and I wanted to get this to work. I found a thread on askubuntu which referred me to the Canon Europe site. There I downloaded the Debian package listed as MG5400 series IJ Printer Driver Ver. 3.80 for Linux (debian Packagearchive). I untar’d and cd’d into the directory, but when attempting to run ./install.sh, I got the error:

dpkg: dependency problems prevent configuration of cnijfilter-mg5400series:
 cnijfilter-mg5400series depends on libtiff4; however:
  Package libtiff4 is not installed.

This is a problem with the old Canon drivers looking for an old libtiff. Mint has libtiff5, but no libtiff4 in its repositories. I was able to grab a libtiff4 deb archive from the Debian repos. I’m using an amd64 processor, as most people are, so in my case I needed libtiff4_3.9.6-11_amd64.deb. After getting that and running sudo dpkg -i ../libtiff4_3.9.6-11_amd64.deb, I was able to continue by running Canon’s ./install.sh normally.

Getting the scanner to work was a little surprising. It doesn’t work with xsane or simple-scan, but when I got MG5400 series ScanGear MP Ver. 2.00 for Linux (debian Packagearchive) from that same Canon Europe site, it worked. Again, I got the tarball, unpacked it, and ran the ./install.sh (this time without a problem). From there I had to run the command scangearmp in the shell, which opened up the program and let me start scanning images. As I said, I wanted my mom to be able to use this, so I created a desktop launcher to start it (and used /usr/share/icons/Mint-X/devices/48/scanner.png for the icon).

Mint and Cinnamon have my stamp of approval

I was looking for a distro to put on my girlfriend’s parents’ desktop. They’re not very tech-savvy and have had their fair share of malware and virus issues using Windows for years. My first choice was Ubuntu, since it’s what I’ve used in situations like this in the past. However, their computer was a fairly old eMachines that Ubuntu couldn’t live boot on for some reason. Now, I’m sure I could’ve gotten Ubuntu to install, but given the users of the computer, I really wanted something that I was sure would work with minimal modifications for a long time. So, I took a shot and made a Linux Mint 17.1 (Rebecca) disc.

Linux Mint booted with no complaints and installed painlessly. Everyone found Cinnamon intuitive, simple, and easy. However, Firefox was freezing, not on the live boot, only after the install. I looked around and found this forum which said to switch to gdm. This worked.

The package manager and software center were also really straightforward. Since my goal was for this to relieve the endless headaches of Windows without too much of a learning curve, I was happy. I installed it as a dual boot on my dad’s Lenovo laptop, which he mostly uses for web browsing, and it’s running smoothly there too.

When I need to install an easy-to-use Linux distro, I’ll be reaching for Linux Mint, pending some disaster.

P.S. Unity confuses new users too much to be worth their trouble most of the time.

Thoughts on TAs’ biggest mistakes, as a TA and as a student

I’m by no means an education expert and not the greatest TA, but I try to be pragmatic in how I approach students, grading, everything. I’ve noticed there are some pretty common faults that I’ve really hated and many TAs don’t seem to care about. A lot of this may appear to be just about grading, but the grading aspects hint at deeper issues. I’ve never seen a good TA who was bad at grading.

-5 for everything

It seems that there’s a huge issue with TAs not realizing that they can take off points in anything other than 5 point increments. You forgot to capitalize your last name in the submission file? -5. Your answer is 1000 words when I asked for a sentence? -5. You missed one specific detail on a test question but were otherwise right? -5. Your presentation consisted of you reading words off of your slides for 10 minutes? -5. You turned in your homework assignment 2 days late? eh, -5.

The motivation for the minor points is this: if the submission instructions for an assignment say to submit “lastname_firstname_proj1.txt”, and 5 kids submit “proj1.txt”, another submits “Untitled.txt”, and the last gives me “smith_john.pdf”, there’s a bunch of stuff that I need to address here. It’s frustrating. I hate having to take off points for stupid submission mistakes, but every TA you talk to will tell you “If you tell them they messed up, but don’t take off points, they’ll just do it again.” Trust me, we all wish we could just inform students of their mistakes on these things and have that be the end of it, but 10/10 times it’ll happen again unless there’s some point loss.

What I’ve found, however, is that it doesn’t matter too much how many points are taken off, as long as it’s not 0. In this regard, -5, -10, and -1 are all equally effective. You might not expect it, but students do respond to -1. What I usually do is take off -1 for each error unrelated to the actual body of the project. -1 per error allows the losses to have little-to-no effect on the student’s actual grade, but it also lets me be fair. If one student gives me “smith_proj1.txt” and the other gives me “proj1.pdf”, clearly these are not both -5 violations. The first student might get -1 for missing their first name (a fair deduction imo), and the second could lose -3 for no first name, no last name, and the wrong file type.

Not talking

I would think this should be obvious, but I’ve had TAs who are silent in lab sections. There’s a less-common problem with some international students whose spoken English skills are shaky, where they seem very hesitant to speak to students at all for fear of sounding bad. I understand where they’re coming from (je parle français comme une vache espagnole, i.e. I speak French like a Spanish cow), but I don’t know how to address the problem other than asking: please try.

(Nearly) all or nothing grading

This isn’t as common, but I see a lot of TAs who don’t actually understand what partial credit means. If there’s a 20 point question, I know TAs who will automatically subtract 15 for it not being exactly the correct answer. Somehow they think this is much better than grading without partial credit, but when people get in this habit, scores are either complete, partially complete, or 0. This sucks.

If you have 20 points to work with, use the full range. A lot of it has to be relative.

“Does everyone understand this?” / “Does anyone have any questions?”

This is the hardest thing to get around, and honestly, I commonly can’t get around this problem myself. If you say anything like this to a class, 95% of the students who would have questions won’t ask them. The only time I will flat out ask students if they have any questions is if I’m teaching a new concept for the first time, and I want to get feedback early (I aim for at least every 5 minutes). Like I said, it’s hard to get around this problem, but here’s what I’ve found works really well instead:

Let’s say I’m conducting an exam review session and the students have had some sample problems to go over (last semester’s exam). If I start by saying “does anyone want me to go over problem 1?” or “does anyone have any questions on problem 1?” No one will say a word. I know because I’ve done this without thinking. If I say instead (or even right after) “Should I go over problem 1?” then EUREKA I’ve got half the class saying “yes!” and nodding. Something about me forfeiting control, asking the class for direction rather than their requests, gets them talking. Once students start talking they’re more likely to ask more solid questions. Even if people don’t speak, you can look around the room after asking a question like this and see them nodding or shaking their heads.

You can’t ask students to demonstrate what they don’t know.

Another, admittedly less effective, means is to invert the question. What I mean is that instead of asking “who doesn’t understand?” say “raise your hand if you understand this.” Now, this requires care, because if you overuse this technique everyone will just raise their hand right away for everything. Even before that habit forms, they’ll raise their hands without really understanding. Hope is not lost though. You have to watch carefully after you say this and see how quickly their hands go up. Look for the students raising their hands slowly with their eyes wincing because they feel like they’re lying to blend in. You won’t get any questions out of this, but you can get a feel for how well the class is following and whether you need to go over something in more depth.

IMO, getting students to ask questions in a classroom is one of the most impressive skills a teacher can have. It’s really really hard.

Assigning grades the first time through

All of my grading is relative in the sense that I don’t start with a rubric. I always get through at least 50% of the exams/programs/homeworks/whatever before I settle on what point values certain things should be worth. If a lot of people have the same error, it might be a big mistake in my eyes, but it hints that there’s a greater misunderstanding in the class that needs addressing (unless it’s because of cheating, of course).

Another problem is that I might see one submission and say “whoa this is definitely no higher than a 50,” but then later see one that’s better, but still isn’t even half-way complete. It’s really difficult to assign grades without having the class in general in mind.

Regrading, disregarding other students for the one in the room

All too often, students go to a TA to complain about their grades and are given points back when they don’t deserve a higher grade. Grading is imperfect; often a TA will accidentally deduct a point or two more than they should. It’s just the nature of the volume of things being dealt with and human error. Students will go through the points they lost with a fine-toothed comb and will easily be enraged when they feel deductions are unjust. I get it, and I have given points back (rarely) for a genuine screw-up on my part. I’m not too proud to admit I made a grading error.

Giving students points back to get them to stop complaining is a widespread problem. The people who suffer from this are the students who don’t complain about their grades. If half of the class complains to a crappy TA, and the TA gives them all 10+ points back, then suddenly the curve has jumped. The students who accepted responsibility for their mistakes now have lower grades, and that’s not fair.

The problem is that there are other parts where I may have missed a deduction of a point or two somewhere else in the assignment or test. I’ll get students who aren’t emailing me over something obvious, but asking about a specific deduction in their work. I’ve always given students the option: “if you’d like, I can regrade the whole thing, but that means the grade can go up or down, because I need to look at everything more closely.” I know others who do the same, and students rarely take anyone up on this option. Grades typically go slightly lower in these cases, if they change at all. It’s not a vengeful or prideful thing, but if the student wants their grade to be a 100% accurate reflection of their work, it’s gonna need a closer look. Closer looks generally reveal more mistakes.

Google style guide makes two major changes

First off, they removed the silly rule about not having copy constructors and preferring a .Clone() member function. Praise the lawd~

Objects of copyable and movable types can be passed and returned by value, which makes APIs simpler, safer, and more general.

Second, they lifted the ban on rvalue references. One may now use rvalue refs to make movable types. They’re still banning std::forward for the time being though.

Progress!

rlwrap on solaris

Trying to use sqlplus on Solaris was proving to be a pain due to the lack of command history. I found an Oracle blog that described using rlwrap to work around this. A few other sites supported this, so I grabbed the source for rlwrap-0.41 and gave it a shot with a simple ./configure && make. When running it, I received an error:

% rlwrap sqlplus
Warning: rlwrap cannot determine terminal mode of sqlplus
(because: Invalid argument).
Readline mode will always be on (as if -a option was set);
passwords etc. *will* be echoed and saved in history list!

rlwrap: error: TIOCSWINSZ failed on no pty: Bad file number

After googling around I found this GitHub issue on a repo which has since fixed the problem. The issue is in pty.c. So, I cloned the source, cd’d in, and did a ./configure. However, this repo didn’t have a configure script. Granted, it works with autoreconf, but I’m working on a limited system where I have very little power. Rather than descend any further into this building-everything-from-source dive, I replaced the pty.c in my existing version with the pty.c from the GitHub repo, and it seems to work fine.

Hacky? Yes, but I’m not planning on spending a ton of time on this system.

wordpress unescaping code blocks

It seems that if I create a code block like the following,

template <typename T>
void f(T a, T b) {
  std::string s = "result: ";
  std::cout << s << a + b << '\n';
}

then publish, edit, and publish again, it escapes and re-escapes the HTML in it after the edit. This is quite hideous.

template &lt;typename T&gt;
void f(T a, T b) {
  std::string s = &quot;result: &quot;;
  std::cout &lt;&lt; s &lt;&lt; a + b &lt;&lt; '\n';
}

and it stacks: each time I edit, it gets worse

template &amp;lt;typename T&amp;gt;
void f(T a, T b) {
  std::string s = &amp;quot;result: &amp;quot;;
  std::cout &amp;lt;&amp;lt; s &amp;lt;&amp;lt; a + b &amp;lt;&amp;lt; '\n';
}

Pretty rough. I’ll have to find a way around this.

Exploding Tuple in C++14

So, as it turns out, the C++14 standard makes expanding a tuple, pair, or array in a function call very simple.

template <typename Func, typename TupleType, 
          std::size_t... Is>
decltype(auto) call_with_tuple_impl(
    Func&& f, TupleType&& tup, 
    std::index_sequence<Is...>) {
  return f(std::get<Is>(tup)...);
}
 
template <typename Func, typename TupleType>
decltype(auto) call_with_tuple(
    Func&& f, TupleType&& tup) {
  constexpr auto TUP_SIZE = std::tuple_size<
    std::decay_t<TupleType>>::value;
  return call_with_tuple_impl(
      std::forward<Func>(f),
      std::forward<TupleType>(tup),
      std::make_index_sequence<TUP_SIZE>{});
}

Much less painful than its C++11 equivalent. std::integer_sequence is responsible for most of the ease, combined with decltype(auto), which lets the functions deduce their own return types.

call_with_tuple first determines the tuple_size of the tuple-like object passed in. std::make_index_sequence will produce the index sequence 0, 1, ..., TUP_SIZE-1. The last argument of call_with_tuple_impl is used to deduce the Is... pack. Finally, f(std::get<Is>(tup)...) is the equivalent of f(std::get<0>(tup), std::get<1>(tup), ..., std::get<TUP_SIZE-1>(tup)).

The std::decay_t is necessary because TupleType is a universal reference, so it may deduce to a reference type, which std::tuple_size won’t accept.

I’m using these integer sequences more and more. They’re proving to be extremely useful in flattening complex recursive data structures and logic. C++14 rules.

How to malloc() the Right Way

Having been in the game for a while now, I’ve seen different styles of malloc()ing data. With new students, I almost universally see:

int *array = (int *)malloc(sizeof(int) * N);

There are two problems that need addressing here:

  1. The cast to (int *)
  2. Using sizeof(int)

If you like either of these, bear with me, I have reasons for changing both to the below:

int *array = malloc(N * sizeof *array);

If you aren’t already aware, the sizeof operator can be applied to an expression. The expression is not evaluated (array isn’t actually dereferenced in the above), only the type of the expression is examined. The type of *array is int, so sizeof *array is equal to sizeof(int), and is also computed at compile-time.

Why you shouldn’t cast

First of all, you don’t need to. All data pointer types in C are implicitly convertible to and from void * (constness permitting). This should be enough of a reason not to use the cast, but keep reading if you’re unconvinced.

Second, and this is vital for students, but especially teachers: code should have as few casts as possible. A cast is a sign that something dangerous or strange is happening. A cast indicates that the normal rules of the type system can’t do what the programmer needs to do. I haven’t seen many good reasons for type-casting outside of low-level systems code. Teachers, we should not teach students to use casts without a second thought. The problem becomes more pronounced when they use a cast somewhere to shut up the compiler, their code breaks, and no one knows why. malloc() is not unusual, strange, or dangerous in C. It shouldn’t be cast; it’s a natural part of the flow of any significant C program.

Third, the cast can actually prevent the compiler from issuing a warning if stdlib.h isn’t included. C89 doesn’t require a declaration for every function, as long as it can be found at link-time. Simply put, this means you can call malloc without including stdlib.h. The compiler will produce an implicit declaration: int malloc(int). This is invalid in C99, but most compilers still allow it.

Why you shouldn’t use sizeof(type) either

If the type changes, the malloc line shouldn’t need to change at all. Let’s examine the first example, where one allocates an array of int. Now imagine the programmer later realizes the type needs to be long, not int.

int *array = (int *)malloc(sizeof(int) * N); /* before */
long *array = (long *)malloc(sizeof(long) * N); /* after */

Three changes. In the latter case we have

int *array = malloc(N * sizeof *array); /* before */
long *array = malloc(N * sizeof *array); /* after */

One change. No changes to the right side of the =, which is important because often the call to malloc() isn’t right next to the variable’s declaration. This makes it clearer that using sizeof(int) and casting prevent your code from being very DRY. For all that is said about DRY code, the easiest way to think about it, in my opinion, is that you want to design your program so that if you need to make a change, you only need to change one thing. Repeating the type in three places makes your code harder to modify and maintain.

A more convincing example

Let’s consider a typical kind of struct: a size and a pointer to the contained data. Along with that struct I’ll create a pair of functions.

myarray.h

#ifndef MY_ARRAY_H_
#define MY_ARRAY_H_

#include <stddef.h>
/* Array type, stores its length alongside the data */
struct my_array {
  size_t size;
  int *data;
};

struct my_array *new_my_array(size_t sz);
void free_my_array(struct my_array *);

#endif

myarray.c

#include "myarray.h"
#include <stdlib.h>
struct my_array *new_my_array(size_t sz) {
  /* space for the struct itself */
  struct my_array *arr = malloc(sizeof *arr);
  /* space for the contained data */
  arr->data = malloc(sz * sizeof *arr->data);
  arr->size = sz;
  return arr;
}

void free_my_array(struct my_array *arr) {
  free(arr->data);
  free(arr);
}

Note: for the more advanced C programmers out there, I know this could be done with a flexible array member, but it’s an example; there could be two arrays, it could be C89, whatever.

The usage of this should be pretty obvious: struct my_array *arr = new_my_array(N);. Now consider the same problem: the programmer decides int should be long. As I’ve written this, the only change needed is in myarray.h. int *data becomes long *data, and the allocation in myarray.c is still correct. There is no cast that needs changing, and since sizeof uses the variable, it’s transitively updated by the change in the header.

If, on the other hand, the malloc line in new_my_array used a cast and a sizeof(int), the code would still compile, but would be wrong. In this example the change would be required across two files, which is problematic enough, but it could be worse. What if the array is growable? Now there’s another malloc or realloc somewhere that needs to be found. What if the array can grow in more than one place? What if the array can shrink too?

Objections

But what if I want it to work in C++?

C++ doesn’t allow implicitly converting from void * to another data pointer type without a cast. This is a valid question, but there are issues with it, mostly arising from the fact that C and C++ are two different languages.

Right away: if you have a C++ project with malloc in it, then malloc is the strange thing to use, and it arguably should require a cast. C++ has new for dynamic allocation, and if you’re allocating an array, you’re better off using std::vector, std::string, or one of the smart pointers if you really think dynamic allocation is best.

If you actually, really need to malloc in C++ (which you probably don’t), then a C-style cast isn’t the right way to do it. It’s a sledgehammer, whereas C++ has more precise tools to say what you actually mean. In the malloc case, what you really want is:

int *arr = static_cast<int *>(std::malloc(N * sizeof *arr));

If you’re thinking “that’s clunky and ugly,” I’d agree, but I’m also not someone who uses malloc in C++ (unless I really really need it). If you’re a C++ programmer and you don’t understand the difference between static_cast, dynamic_cast, reinterpret_cast, and const_cast, then seriously, start reading up. A student once asked me: “what’s the right time to use a C-style cast?” and as I told her, “when you’re programming in C.”

That’s not what I mean! I want my library to work in both C and C++

For that, you should be using extern “C”. Note: what follows is only tangentially related to the original point of this post.

You have written a C library, and you want to link it with C++ code. This is quite normal, and there are two rules of C++ that let us do so. C++ allows one to prefix a function declaration with extern "C", meaning the name will not be mangled and the linker knows to look for a C-style function name, rather than a C++-style name (which would be mangled). The declaration then appears as extern "C" int f(int);. However, we can’t just throw extern "C" around in C code, because the C language knows nothing about it. The other rule we have says that a conforming C++ compiler must define the preprocessor symbol __cplusplus. Combining these two, one can create a declaration that works in both C++ and C:

#ifdef __cplusplus
extern "C"
#endif
void myfunc(int);

In the myarray example, the whole thing can be wrapped in an extern "C" block.

#ifndef MY_ARRAY_H_
#define MY_ARRAY_H_
#include <stddef.h>

#ifdef __cplusplus
extern "C" {
#endif

/* Array type, stores its length alongside the data */
struct my_array {
  size_t size;
  int *data;
};

struct my_array *new_my_array(size_t sz);
void free_my_array(struct my_array *);

#ifdef __cplusplus
} /* close the extern "C" block */
#endif

#endif

Voila, now your C and C++ code can share a header, be compiled as their own language, and still be linked.

$ gcc -std=c89 -Wall -c myarray.c
$ g++ -std=c++11 -Wall -c main.cpp
$ g++ myarray.o main.o -o main

This whole concept may merit more explanation in another post.

I don’t really understand what you just said, but I meant that I want to be able to run my code through a C++ compiler as well as a C compiler

Well, in that case you’ll find yourself restricted to a subset of C89 and C99. You’ll also have more problems to deal with than you realize: there are a lot of things that are perfectly valid C, but when put through a C++ compiler, they crash and burn.

If you still prefer to cast malloc()’s return in C, or to use sizeof(type), I’m interested as to why, so please, do tell.

Another Exploding Tuple

After watching Andrei Alexandrescu’s talk at GoingNative 2013, I wanted to take a crack at it myself. The presentation covers how to expand a tuple into individual arguments in a function call. Being a Python programmer, I’m a little spoiled by func(*args), so the ability to do this in C++11 is something I’m eager to use. What I came up with wound up being quite similar, but more flexible: I wanted to make it more generic, so it works with std::pair and std::array. The version presented in that video is incredibly powerful, but it can go a bit further.

The limitations start at the top level, the explode free function.

template <class F, class... Ts>
auto explode(F&& f, const tuple<Ts...>& t)
    -> typename result_of<F(Ts...)>::type
{
    return Expander<sizeof...(Ts),
      typename result_of<F(Ts...)>::type,
      F,
      const tuple<Ts...>&>::expand(f, t);
}

The tuple& argument provides a means to use result_of to figure out the return type, and sizeof... to determine the size of the tuple itself. Both can be accomplished by other means. decltype can be used to figure out the return type; it needs more typing, but it removes the need for result_of. As for sizeof..., there is a std::tuple_size available which can reach the same end. Using it makes explode non-variadic, and taking a universal reference, rather than capturing the parameter pack, means separate versions for lvalue and rvalue refs aren’t needed.

My initial function (called expand instead) is:

template <typename Functor, typename Tup>
auto expand(Functor&& f, Tup&& tup)
  -> decltype(Expander<std::tuple_size<typename std::remove_reference<Tup>::type>::value, Functor, Tup>::call(
        std::forward<Functor>(f),
        std::forward<Tup>(tup)))
{
    return Expander<
        std::tuple_size<typename std::remove_reference<Tup>::type>::value, 
        Functor, 
        Tup>::call(
          std::forward<Functor>(f),
          std::forward<Tup>(tup));
}

Some things to note:

  1. std::tuple_size works on std::pair (yielding 2) and on std::array (yielding the size of the array).
  2. std::get also supports std::pair and std::array, meaning that now tuple, pair, and array can all work in this context.
  3. std::remove_reference is needed when calling std::tuple_size because tup is a universal reference, and Tup may deduce to an lvalue reference type.

The decltype goes through each level of the expansion, until much like the original, it hits a base case and does the call.

#include <cstddef>
#include <tuple>
#include <utility>
#include <type_traits>
#include <array>

template <std::size_t Index, typename Functor, typename Tup>
struct Expander {
  template <typename... Ts>
  static auto call(Functor&& f, Tup&& tup, Ts&&... args)
    -> decltype(Expander<Index-1, Functor, Tup>::call(
        std::forward<Functor>(f),
        std::forward<Tup>(tup),
        std::get<Index-1>(tup),
        std::forward<Ts>(args)...))
  {
    return Expander<Index-1, Functor, Tup>::call(
        std::forward<Functor>(f),
        std::forward<Tup>(tup),
        std::get<Index-1>(tup),
        std::forward<Ts>(args)...);
  }
};

template <typename Functor, typename Tup>
struct Expander<0, Functor, Tup> {
  template <typename... Ts>
  static auto call(Functor&& f, Tup&&, Ts&&... args)
    -> decltype(f(std::forward<Ts>(args)...))
  {
    static_assert(
      std::tuple_size<
          typename std::remove_reference<Tup>::type>::value
        == sizeof...(Ts),
      "tuple has not been fully expanded");
    // actually call the function
    return f(std::forward<Ts>(args)...);
  }
};

template <typename Functor, typename Tup>
auto expand(Functor&& f, Tup&& tup)
  -> decltype(Expander<std::tuple_size<
      typename std::remove_reference<Tup>::type>::value, 
      Functor,
      Tup>::call(
        std::forward<Functor>(f),
        std::forward<Tup>(tup)))
{
  return Expander<std::tuple_size<
      typename std::remove_reference<Tup>::type>::value, 
      Functor,
      Tup>::call(
        std::forward<Functor>(f),
        std::forward<Tup>(tup));
}

A few examples showing the flexibility.

int f(int, double, char);
int g(const char *, int);
int h(int, int, int);

int main() {
    expand(f, std::make_tuple(2, 2.0, '2'));

    // works with pairs
    auto p = std::make_pair("hey", 1);
    expand(g, p); 

    // works with std::arrays
    std::array<int, 3> arr = {{1,2,3}};
    expand(h, arr);
}

Each level of the call takes one argument at a time off the back of the tuple using std::get and the template Index parameter, decrements the index, and recurses. This is a bit hard to imagine, so I’ll illustrate. This sequence is not meant to be taken too literally.

Let’s say I have a tuple of string, int, char, and double. I’ll denote this example tuple as tuple("hello", 3, 'c', 2.0). The expansion would happen something like the following

expand(f, tuple("hello", 3, 'c', 2.0)) 
-> call<4>(f, tuple("hello", 3, 'c', 2.0))
-> call<3>(f, tuple("hello", 3, 'c', 2.0), 2.0)
-> call<2>(f, tuple("hello", 3, 'c', 2.0), 'c', 2.0)
-> call<1>(f, tuple("hello", 3, 'c', 2.0), 3, 'c', 2.0)
-> call<0>(f, tuple("hello", 3, 'c', 2.0), "hello", 3, 'c', 2.0)
-> f("hello", 3, 'c', 2.0)

Of course, std::integer_sequence in C++14 turns all of this on its head. Maybe I should’ve implemented that instead…