All convolutions inside a dense block are ReLU-activated and use batch normalization. Channel-wise concatenation is only feasible if the height and width dimensions of the data remain unchanged, so convolutions inside a dense block all have a stride of one. Pooling layers are inserted between dense blocks for dimensionality reduction.
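To make this concrete, here is a minimal PyTorch sketch of a dense block followed by a pooling transition. The specific values (growth rate, layer count, the 3x3 kernel, and average pooling) are illustrative assumptions, not taken from the text; only the structural constraints match the description: stride-one convolutions preserve spatial dimensions so channel-wise concatenation is valid, and pooling between blocks reduces dimensionality.

```python
import torch
import torch.nn as nn


class DenseLayer(nn.Module):
    """BN -> ReLU -> 3x3 conv; stride 1 with padding 1 keeps the
    height and width of the feature map unchanged."""

    def __init__(self, in_channels: int, growth_rate: int) -> None:
        super().__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        self.conv = nn.Conv2d(in_channels, growth_rate,
                              kernel_size=3, stride=1, padding=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.conv(torch.relu(self.bn(x)))
        # Channel-wise concatenation: valid only because the spatial
        # dimensions of x and out are identical.
        return torch.cat([x, out], dim=1)


class DenseBlock(nn.Module):
    """Stack of dense layers; each layer sees all preceding feature maps."""

    def __init__(self, in_channels: int, growth_rate: int, num_layers: int) -> None:
        super().__init__()
        self.layers = nn.Sequential(*[
            DenseLayer(in_channels + i * growth_rate, growth_rate)
            for i in range(num_layers)
        ])
        self.out_channels = in_channels + num_layers * growth_rate

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layers(x)


# Illustrative sizes (assumptions): 64 input channels, growth rate 32,
# 4 layers per block. Pooling between blocks halves height and width.
block = DenseBlock(in_channels=64, growth_rate=32, num_layers=4)
pool = nn.AvgPool2d(kernel_size=2, stride=2)

x = torch.randn(1, 64, 32, 32)
y = pool(block(x))
print(y.shape)  # torch.Size([1, 192, 16, 16])
```

Note how the channel count grows inside the block (64 + 4 x 32 = 192) while the spatial dimensions stay fixed; only the pooling transition between blocks changes the resolution.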