Note: For this guide, we’ll focus on functions that operate on the scalar preactivations at each neuron individually.
Very frustrating to see this framing, since large-model experiments have shown that the choice of scalar activation function makes only a tiny difference once the model is wide enough.
https://arxiv.org/abs/2002.05202v1 shows GLU-based activation functions (2 inputs -> 1 output) almost universally beating their scalar equivalents. IMO there needs to be more work on these kinds of multi-input constructions, since the potential gains are much bigger.
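To make the "2 inputs -> 1 output" point concrete, here's a minimal numpy sketch of the SwiGLU variant from that paper, contrasted with a plain scalar activation. The weight names `W` and `V` are just illustrative; real implementations fold the bias terms and dimensions differently.

```python
import numpy as np

def swish(x):
    # Scalar activation: acts on each preactivation independently.
    return x / (1.0 + np.exp(-x))

def swiglu(x, W, V):
    # GLU-style activation: each output unit combines TWO linear
    # projections of the input -- a gate, swish(x @ W), and a value,
    # x @ V -- so it is a function of two preactivations, not one.
    return swish(x @ W) * (x @ V)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))    # batch of 4, input width 8
W = rng.standard_normal((8, 16))   # gate projection (hypothetical shapes)
V = rng.standard_normal((8, 16))   # value projection
out = swiglu(x, W, V)              # shape (4, 16)
```

Note the cost: a GLU layer spends twice the parameters per output unit, which is why the paper shrinks the hidden width to keep comparisons parameter-matched.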
E.g. even in cases where the network only needs static routing (tabular data), transformers sometimes perform magically better than MLPs. This suggests there's something special about self-attention as an "activation function". If that magic can be extracted and made sub-quadratic, it could be a paradigm shift in NN design.