That's actually a good idea. I don't know which method would arrive at the answer faster.
Your answer has a complexity of roughly log(number) times the cost of the operations in the Taylor series.
His answer has a complexity of roughly just the cost of the operations in the Taylor series.
His seems better :3
My suggestion is actually log(log(number)) + Taylor: you can divide by e, and if that doesn't work, by e^2, e^4, e^8, etc. Once you find a power that works, you can work your way back down to refine the value, and you only use Taylor once you've figured out which number you'll take the logarithm of. Doing some tests, the real problem with my idea is error propagation, which starts to get ugly with ~10-digit numbers. You could probably get rid of that by using powers of 2 and storing ln(2) in memory to make the conversion.
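A minimal sketch of that doubling-then-refining search in Python (`reduce_range` is a hypothetical helper name, not anything standard; it assumes x >= 1 and returns k and a remainder in [1, e), so ln(x) = k + ln(remainder)):

```python
import math

def reduce_range(x):
    """Find k such that x / e**k lies in [1, e), by doubling then refining.

    Doubling phase: divide by e, e^2, e^4, e^8, ... (squaring each time)
    until the next power overshoots; refinement phase walks the step back
    down.  Roughly O(log log x) divisions; the repeated squaring is also
    where the error propagation mentioned above comes from.
    """
    assert x >= 1.0
    k = 0
    step = 1
    e_pow = math.e              # e**step
    # doubling phase: keep dividing while the current power still fits
    while x / e_pow >= 1.0:
        x /= e_pow
        k += step
        step *= 2
        e_pow *= e_pow          # square: e**step -> e**(2*step)
    # refinement phase: halve the step and retry until step reaches 1
    while step > 1:
        step //= 2
        e_pow = math.exp(step)
        if x / e_pow >= 1.0:
            x /= e_pow
            k += step
    return k, x                 # ln(original x) = k + ln(x), x in [1, e)
```

After this, a single Taylor evaluation on the remainder finishes the job.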
After thinking a bit, using the series on 1/x is a bad idea if x is very large, because of the rate of convergence. Since
ln(z+1) = z - z^2/2 + z^3/3 - ...
and we'd have z+1 = 1/x, that would mean z = 1/x - 1, so |z| = 1 - 1/x and the term in position n*x in the series has magnitude roughly (1-1/x)^(n*x)/(n*x) ~ e^(-n)/(n*x). We'd need roughly O(x) terms to get each decimal place in the result, which becomes a problem for large x.
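To see that O(x) growth concretely, here's a small Python check (`terms_needed` is a hypothetical helper that counts series terms until the current term drops below a tolerance):

```python
def terms_needed(x, tol=1e-3):
    """Count terms of ln(1+z) with z = 1/x - 1 until |z**n / n| <= tol.

    Since |z| = 1 - 1/x is close to 1 for large x, the count grows
    roughly linearly in x.
    """
    z = 1.0 / x - 1.0
    n, term = 1, z              # term == z**n throughout
    while abs(term) / n > tol:
        n += 1
        term *= z
    return n
```

Running this for x = 10, 100, 1000 shows the term count climbing roughly in proportion to x.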
I also noticed that if you are using floating point numbers (which is how computers store real numbers) you can cheat your answer a bit.
These numbers are stored as a*2^b, so if you store ln(2) in your code you can get the logarithm with
b*ln(2) - ln(1/a).
By construction a is between 1 and 2, so 1/a is between 1/2 and 1; that means the series argument z = 1/a - 1 satisfies |z| <= 1/2, which gives fast convergence.
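A sketch of that trick in Python (`ln_via_frexp` is a hypothetical name; note that Python's math.frexp returns a mantissa m in [0.5, 1) rather than a in [1, 2), so we can use ln(x) = b*ln(2) + ln(m) directly, with |z| = |m - 1| <= 1/2 already giving fast convergence):

```python
import math

def ln_via_frexp(x, terms=60):
    """ln(x) from the exponent trick: x = m * 2**b via math.frexp,
    then ln(x) = b*ln(2) + ln(m), with ln(m) from the series
    ln(1+z) = z - z^2/2 + z^3/3 - ...  for z = m - 1 in [-0.5, 0)."""
    m, b = math.frexp(x)        # x == m * 2**b, 0.5 <= m < 1
    z = m - 1.0                 # |z| <= 0.5, so ~60 terms is plenty
    s, zn, sign = 0.0, 1.0, 1.0
    for n in range(1, terms + 1):
        zn *= z                 # zn == z**n
        s += sign * zn / n
        sign = -sign
    return b * math.log(2) + s
```

Since |z| <= 1/2, the truncation error after 60 terms is far below double precision, so this matches math.log essentially to machine accuracy.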