
Activations¤

Trainable nonlinearities.

Note

  • Stan is a self-scalable tanh: \(\tanh(x)\,(1+\beta x)\).
  • AdaptiveActivation wraps \(\sigma\) as \(x\mapsto\sigma(ax)\).

phydrax.nn.Stan ¤

Self-scalable tanh (Stan) activation.

Applies

\[ \text{Stan}_\beta(x)=\tanh(x)\,(1+\beta x), \]

with trainable \(\beta\) (scalar or broadcastable array).

__init__(shape: int | collections.abc.Sequence[int] | None = None, *, key: Key[Array, ''] = jr.key(0)) ¤

Arguments:

  • shape: Shape of \(\beta\) (use None for a scalar).
  • key: PRNG key (unused; included for API compatibility).
__call__(x: Array) -> Array ¤

Apply \(\text{Stan}_\beta\) to x.

Computes \(\tanh(x)\,(1+\beta x)\), broadcasting over the shape of \(\beta\).
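The formula can be illustrated with a minimal scalar sketch in plain Python (this is not the library class, which operates on JAX arrays with a trainable \(\beta\); the function name `stan` here is illustrative):

```python
import math

def stan(x: float, beta: float = 1.0) -> float:
    """Self-scalable tanh: tanh(x) * (1 + beta * x)."""
    return math.tanh(x) * (1.0 + beta * x)

# With beta = 0, Stan reduces to a plain tanh.
print(stan(1.0, beta=0.0) == math.tanh(1.0))
```

The extra \(1+\beta x\) factor lets the activation's output scale grow with its input, which is the trainable "self-scalable" part; training adjusts \(\beta\) per scalar (or per the broadcast shape chosen at construction).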


phydrax.nn.AdaptiveActivation ¤

Adaptive activation wrapper.

Wraps an activation \(\sigma\) as

\[ x\mapsto\sigma(a x), \]

where \(a\) is a trainable scalar (layer-wise) or broadcastable vector (neuron-wise).

__init__(fn: collections.abc.Callable[[Array], Array], /, *, shape: int | collections.abc.Sequence[int] | None = None, key: Key[Array, ''] = jr.key(0)) ¤

Arguments:

  • fn: Base activation function \(\sigma\).
  • shape: Shape of the trainable coefficient \(a\) (use None for a scalar).
  • key: PRNG key (unused; included for API compatibility).
__call__(x: Array) -> Array ¤

Apply the adaptive activation to x.

Computes \(\sigma(ax)\), where \(\sigma\) is the wrapped fn and \(a\) is the trainable coefficient.
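The wrapper's behavior can be sketched with a small closure in plain Python (a hedged illustration of the \(x\mapsto\sigma(ax)\) idea, not the library's implementation; here `a` is a fixed float rather than a trainable parameter):

```python
import math

def adaptive(fn, a: float = 1.0):
    """Wrap an activation fn as x -> fn(a * x) with a fixed slope a."""
    def wrapped(x: float) -> float:
        return fn(a * x)
    return wrapped

# a = 1 recovers the base activation unchanged.
act = adaptive(math.tanh, a=1.0)
print(act(0.5) == math.tanh(0.5))
```

In the library, \(a\) is trainable: a scalar gives one layer-wise slope, while a broadcastable shape gives per-neuron slopes, so gradient descent can steepen or flatten the activation where useful.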