We present a method for modeling the temporal profiles of sound descriptors using Segmental Models. Unlike standard HMMs, this approach allows the fine structure of temporal profiles to be modeled with a reduced number of states. These states, which we call primitives, can be chosen by the user on the basis of prior knowledge and assembled to model symbolic musical elements. In this paper, we describe this general methodology and evaluate it on a dataset of violin recordings containing crescendo/decrescendo, glissando, and sforzando gestures. The results show that, in this context, the segmental model can segment and recognize these musical elements with satisfactory accuracy.
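To make the contrast with frame-wise HMMs concrete, the following is a minimal sketch of semi-Markov (segmental) Viterbi decoding, in which each state scores an entire variable-length segment as a unit rather than one frame at a time. The state names, the `seg_score` interface, and the maximum-duration parameter are illustrative assumptions, not the descriptors or primitives used in the paper.

```python
import numpy as np

def segmental_viterbi(obs, states, seg_score, max_dur):
    """Semi-Markov (segmental) Viterbi decoding.

    Unlike a frame-wise HMM, each state emits a whole variable-length
    segment obs[t0:t], scored as a unit by seg_score(segment, state).
    Returns the best segmentation as (state, start, end) triples.
    """
    T = len(obs)
    best = np.full((T + 1, len(states)), -np.inf)
    back = {}
    best[0, :] = 0.0  # empty prefix: any state may start the sequence
    for t in range(1, T + 1):
        for j, s in enumerate(states):
            # try every admissible duration d for a segment ending at t
            for d in range(1, min(max_dur, t) + 1):
                k = int(best[t - d].argmax())  # best predecessor state
                score = best[t - d, k] + seg_score(obs[t - d:t], s)
                if score > best[t, j]:
                    best[t, j] = score
                    back[(t, j)] = (t - d, k)
    # backtrace: recover the optimal sequence of segments
    segs = []
    t, j = T, int(best[T].argmax())
    while t > 0:
        t0, k = back[(t, j)]
        segs.append((states[j], t0, t))
        t, j = t0, k
    return segs[::-1]
```

With a segment score that measures how well a segment matches a state's expected profile (plus a small per-segment penalty to prefer fewer segments), the decoder jointly segments and labels the observation sequence, which is the role the segmental model plays in the evaluation described above.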