In his analysis of /dˤ/-variation in Saudi Arabian newscasting, Al-Tamimi (2020) finds unpredictable variability
between the standard variant [dˤ] and the non-standard variant [ðˤ] across different word positions, in different
phonetic environments, and in semantically ‘content’ and suprasegmentally ‘stressed’ lexical items assumed to
favor the standard variant. He even finds, in many of these lexical items, an unusual realizational fluctuation
between the two variants. The present exploratory and ‘theory-testing’ study aims to provide a reasonable account of
these findings by examining the explanatory adequacy of a number of available phonological theories,
notions, models, and proposals that attempt, in different ways, to accommodate variation, including
Coexistent Phonemic Systems, Standard Generative Phonology, Lexical Diffusion, Variable Rules, Poly-Lectal
Grammar, Articulatory Phonology, different versions of Optimality Theory, and the Multiple-Trace Model as
represented by Al-Tamimi’s (2005) Multiple-Trace-Based Proposal. The study reveals
the strengths and weaknesses of these theories in embracing the variability in the data, and concludes that the
Multiple-Trace-Based Proposal offers comparatively the best insight, as it allows variation to be directly encoded
in the underlying representations of lexical items, an option strictly prohibited by the other theories, which adopt
invariant lexical representations in consonance with the ‘Homogeneity Doctrine’.