I would say that it's not only feasible; it's inevitable.
PROVIDED:
Humans can come to agreement on what does or does not count as 'intelligence,' 'mind,' or any number of other related concepts.
If the question is whether there will be a human-designed machine capable of displaying behaviours consistent with our definitions of the above, then yes, yes there will. However, chances are we won't have a standard by which to judge the birth of strong AI when it happens; strong AI will be required to help sort out the puzzle of whether it is actually strong AI or a simulacrum thereof.
Every time a machine outperforms a human in a given field (from chess to medical diagnosis), we shift the goalposts further away from accepting machine intelligence. The day is swiftly approaching (well before 2112, to borrow from Rush) when we, as a species, will have no domain left in which we can demonstrate clear superiority. That will be the day we accept strong AI.