Question
Sometimes we may have knowledge about a classification problem that comes from previous studies or some other fundamental knowledge. In these cases we would, ideally, like to take that information into account. For instance, we might know from a previous study that 70% of people who get a particular disease are women and that about 55% of people who don't get the disease are women. Or we might believe that there should be no connection between a disease and sex, even if the data we're learning from may be biased. If we're trying to predict whether a person has that disease given a bunch of information including their sex, how can we make use of that knowledge?

Specifically, consider a classification problem where we are trying to predict the binary class label y ∈ {0, 1} given the input x. Further, assume that x has two parts, i.e., x = (x1, x2), where x2 ∈ {0, 1} and we already know the distribution p(x2 | y). Derive a classifier p(y | x) which explicitly takes advantage of this known distribution but makes no other assumption about the data. Explicitly define what other distributions we need to estimate from the data. If x1 is a discrete variable with K choices (i.e., x1 ∈ {1, 2, ..., K}) and we want to avoid making more assumptions about the data, what are the forms of the distributions and how many parameters do we need to estimate? Be sure to justify your answer and your derivation thoroughly.
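One way to see the structure of the problem, as a sketch that uses only Bayes' rule and the chain rule (this is one possible factorization, not the verified expert answer below), is to factor the joint likelihood so that the known piece p(x2 | y) appears explicitly:

\begin{aligned}
p(y \mid x_1, x_2)
  &= \frac{p(x_1, x_2 \mid y)\, p(y)}{p(x_1, x_2)} \\
  &= \frac{p(x_2 \mid y)\, p(x_1 \mid x_2, y)\, p(y)}
          {\sum_{y' \in \{0,1\}} p(x_2 \mid y')\, p(x_1 \mid x_2, y')\, p(y')}.
\end{aligned}

Under this particular factorization, p(x2 | y) is the distribution assumed known, and the remaining factors, p(x1 | x2, y) and p(y), are what would still have to be estimated from data. If x1 takes K discrete values, p(x1 | x2, y) is a categorical distribution with K - 1 free parameters for each of the four settings of (x2, y), and p(y) is a Bernoulli with one parameter, i.e., 4(K - 1) + 1 parameters in total under these assumptions.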
Step by Step Solution
There are 3 Steps involved in it
Step: 1
Step: 2
Step: 3