diff --git a/docs/api/configuration.rst b/docs/api/configuration.rst
index 4707b4a12ef3acdc71fe16cc4af3eba6b88aff18..d2511950ed08717d84f91a2539f369da450ee736 100644
--- a/docs/api/configuration.rst
+++ b/docs/api/configuration.rst
@@ -68,7 +68,7 @@ It looks like this:
    XilinxPart: xcku115-flvb2104-2-i
    ClockPeriod: 5
 
-   IOType: io_parallel # options: io_serial/io_parallel
+   IOType: io_parallel # options: io_parallel/io_stream
    HLSConfig:
      Model:
        Precision: ap_fixed<16,6>
@@ -91,7 +91,56 @@ There are a number of configuration options that you have.  Let's go through the
 * **XilinxPart**\ : the particular FPGA part number that you are considering; here it's a Xilinx Kintex UltraScale FPGA
 * **ClockPeriod**\ : the clock period, in ns, at which your algorithm runs
   Then you have some optimization parameters for how your algorithm runs:
-* **IOType**\ : your options are ``io_parallel`` or ``io_serial`` where this really defines if you are pipelining your algorithm or not
+* **IOType**\ : your options are ``io_parallel`` or ``io_stream``\ , which defines the type of data structure used for inputs, intermediate activations between layers, and outputs. For ``io_parallel``, arrays are used that, in principle, can be fully unrolled and are typically implemented in RAMs. For ``io_stream``, HLS streams are used, which are a more efficient/scalable mechanism to represent data that are produced and consumed in a sequential manner. Typically, HLS streams are implemented with FIFOs instead of RAMs. For more information see `here <https://docs.xilinx.com/r/en-US/ug1399-vitis-hls/pragma-HLS-stream>`__. A minimal sketch contrasting the two interfaces appears after this list.
 * **ReuseFactor**\ : in the case that you are pipelining, this defines the initiation interval of the pipeline, i.e. the number of clock cycles before the design can accept a new set of inputs
 * **Strategy**\ : the optimization strategy on the FPGA, either "Latency" or "Resource". If none is supplied, hls4ml uses "Latency" as the default. Note that a reuse factor larger than 1 should be specified when using the "Resource" strategy. An example of using a larger reuse factor can be found `here <https://github.com/hls-fpga-machine-learning/models/tree/master/keras/KERAS_dense>`__.
 * **Precision**\ : this defines the precision of your inputs, outputs, weights and biases. It is denoted by ``ap_fixed<X,Y>``\ , where ``X`` is the total number of bits and ``Y`` is the number of bits representing the signed number above the binary point (i.e. the integer part, including the sign bit). A short worked example of this format follows the sketch below.
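+
+To make the ``IOType`` and ``ReuseFactor`` options concrete, here is a minimal, hypothetical C++ sketch (the function names, layer sizes, and arithmetic are placeholders, not the code hls4ml actually generates) contrasting the two interface styles:
+
+.. code-block:: cpp
+
+   #include "ap_fixed.h"
+   #include "hls_stream.h"
+
+   typedef ap_fixed<16, 6> data_t;
+
+   // IOType: io_parallel -- inputs and outputs are plain arrays that the HLS
+   // compiler can partition and access fully in parallel.
+   void myproject_parallel(data_t input[16], data_t output[4]) {
+   #pragma HLS ARRAY_PARTITION variable=input complete
+   #pragma HLS PIPELINE II=1 // ReuseFactor=1 corresponds to an initiation interval of 1
+       for (int i = 0; i < 4; i++) {
+           data_t acc = 0;
+           for (int j = 0; j < 16; j++)
+               acc += input[j]; // stand-in for the real layer arithmetic
+           output[i] = acc;
+       }
+   }
+
+   // IOType: io_stream -- inputs and outputs are hls::stream FIFOs that are
+   // read and written strictly in order, one element at a time.
+   void myproject_stream(hls::stream<data_t> &input, hls::stream<data_t> &output) {
+       data_t buf[16];
+       for (int j = 0; j < 16; j++)
+           buf[j] = input.read(); // consume the stream sequentially
+       for (int i = 0; i < 4; i++) {
+           data_t acc = 0;
+           for (int j = 0; j < 16; j++)
+               acc += buf[j];
+           output.write(acc); // produce the results in order
+       }
+   }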
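+
+Similarly, a short worked example of what ``Precision: ap_fixed<16,6>`` means: 16 bits in total, 6 of them (sign bit included) above the binary point, leaving 10 fractional bits, so values span [-32, 32) in steps of 2^-10:
+
+.. code-block:: cpp
+
+   #include "ap_fixed.h"
+   #include <iostream>
+
+   int main() {
+       ap_fixed<16, 6> w = 3.1415926;       // quantized (truncated by default) to a multiple of 2^-10
+       ap_fixed<16, 6> top = 31.9990234375; // largest representable value: 2^5 - 2^-10
+       std::cout << w.to_double() << " " << top.to_double() << std::endl;
+   }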