This paper describes an architecture for symbolic and subsymbolic interaction during massively parallel natural language recognition. The model is centered on a graph-based constraint-propagation network connected to a recurrent neural network that provides contextually sensitive predictions. Integrating massively parallel symbolic processing with subsymbolic neural-network (PDP) processing gives the symbolic system smooth a posteriori learning, and gives the neural network focused, guided learning as well as strong constraints during recognition. As a result, the architecture can enforce strict, structured symbolic constraints during recognition while delivering smooth, minimally rigid contextual prediction and learning sentential regularities from actual dialog samples.
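To make the hybrid control flow concrete, the following is a minimal illustrative sketch, not the paper's implementation: a toy constraint-propagation graph whose hypothesis activations are relaxed under symbolic support/conflict links while being biased by contextual scores from a small recurrent network. All names (Hypothesis, propagate, SimpleRNN, the dialog-act labels) are assumptions introduced here for illustration.

```python
import math
import random

class Hypothesis:
    """A node in the symbolic constraint-propagation graph (illustrative)."""
    def __init__(self, label, activation=0.5):
        self.label = label
        self.activation = activation
        self.supports = []    # hypotheses this node is consistent with
        self.conflicts = []   # hypotheses this node contradicts

def propagate(hypotheses, prediction, rate=0.2, steps=10):
    """Relax activations under symbolic links plus a per-label
    subsymbolic prediction score acting as a contextual bias."""
    for _ in range(steps):
        updates = {}
        for h in hypotheses:
            support = sum(s.activation for s in h.supports)
            conflict = sum(c.activation for c in h.conflicts)
            contextual = prediction.get(h.label, 0.0)   # RNN bias
            delta = rate * (support - conflict + contextual)
            updates[h] = min(1.0, max(0.0, h.activation + delta))
        for h, a in updates.items():
            h.activation = a
    return hypotheses

class SimpleRNN:
    """Tiny Elman-style recurrent net producing per-label context scores."""
    def __init__(self, labels, hidden=8, seed=0):
        rng = random.Random(seed)
        self.labels = labels
        self.w_in = [[rng.uniform(-0.5, 0.5) for _ in labels] for _ in range(hidden)]
        self.w_rec = [[rng.uniform(-0.5, 0.5) for _ in range(hidden)] for _ in range(hidden)]
        self.w_out = [[rng.uniform(-0.5, 0.5) for _ in range(hidden)] for _ in labels]
        self.state = [0.0] * hidden

    def step(self, observed_label):
        # One-hot encode the observed label, update the hidden state,
        # and emit a contextual score for every label.
        x = [1.0 if l == observed_label else 0.0 for l in self.labels]
        new_state = []
        for i in range(len(self.state)):
            s = sum(w * v for w, v in zip(self.w_in[i], x))
            s += sum(w * v for w, v in zip(self.w_rec[i], self.state))
            new_state.append(math.tanh(s))
        self.state = new_state
        return {l: sum(w * v for w, v in zip(self.w_out[j], self.state))
                for j, l in enumerate(self.labels)}

# Usage: the RNN's contextual scores bias which hypotheses win during relaxation.
labels = ["greeting", "request", "confirmation"]
hyps = {l: Hypothesis(l) for l in labels}
hyps["greeting"].conflicts.append(hyps["request"])
hyps["request"].conflicts.append(hyps["greeting"])

rnn = SimpleRNN(labels)
prediction = rnn.step("greeting")          # context after hearing a greeting
propagate(list(hyps.values()), prediction)
for h in hyps.values():
    print(h.label, round(h.activation, 3))
```

In this sketch the symbolic graph supplies hard mutual-exclusion structure, while the recurrent component supplies a soft, context-dependent preference, mirroring the division of labor described above.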