Prior elicitation is a foundational problem in Bayesian statistics, particularly in the context of hypothesis testing and model selection. On one end of the spectrum, it is well known that the standard "non-informative" priors used for parameter estimation when little prior information is available can lead to ill-defined or inconsistent Bayes factors. On the other end, ignoring structural information available in specific problems can lead to procedures with suboptimal (frequentist) properties. This talk will discuss some recent developments in the construction of hierarchical priors for model selection in various settings involving linear and generalized linear models. First, for situations in which no prior information is available, we introduce a new class of power-expected-posterior (PEP) priors for generalized linear models that have strong theoretical guarantees and can be easily incorporated into standard MCMC algorithms. Then, we move on to discuss two applications in which the nature of the data and the scientific problem under consideration motivate novel prior distributions on the model and/or parameter space. These applications illustrate the advantages of incorporating problem-specific information into statistical models.