1. List Generator
Description
The following code raises an error. Why?
class A(object):
    x = 1
    gen = (x for _ in xrange(10))  # in Python 3: (x for _ in range(10))

if __name__ == "__main__":
    print(list(A.gen))
Answer
The problem is variable scope. In gen = (x for _ in xrange(10)), gen is a generator expression, and a generator expression has its own scope, which is isolated from the surrounding class scope; names defined in the class body (such as x) are not visible inside it. That is why we get NameError: name 'x' is not defined. So what is the solution? The answer is: lambda.
class A(object):
    x = 1
    # the lambda introduces a function scope, so x is visible to the generator inside it
    gen = (lambda x: (x for _ in xrange(10)))(x)  # in Python 3: range(10)

if __name__ == "__main__":
    print(list(A.gen))
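Another workaround, sketched here for Python 3 under the assumption that the generator is only consumed after the class definition has completed: reference the class attribute explicitly, so the name is resolved lazily through the module scope instead of the class scope.

class A(object):
    x = 1
    # A.x is looked up only when the generator is iterated; by then the
    # class A exists in the module namespace, so the lookup succeeds
    gen = (A.x for _ in range(10))

if __name__ == "__main__":
    print(list(A.gen))  # [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]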
2. Decorators
Description
I want to write a class-based decorator to measure the running time of functions and methods:
import time

class Timeit(object):
    def __init__(self, func):
        self._wrapped = func

    def __call__(self, *args, **kws):
        start_time = time.time()
        result = self._wrapped(*args, **kws)
        print("elapsed time is %s " % (time.time() - start_time))
        return result
This decorator can run on ordinary functions:
@Timeit
def func():
    time.sleep(1)
    return "invoking function func"

if __name__ == '__main__':
    func()  # output: elapsed time is 1.00044410133
But it raises an error when applied to a method. Why?
class A(object):
    @Timeit
    def func(self):
        time.sleep(1)
        return 'invoking method func'

if __name__ == '__main__':
    a = A()
    a.func()  # Boom!
If I insist on using a class-based decorator, how should I modify it?
Answer
Once func has been replaced by a Timeit instance, a.func no longer goes through the function descriptor protocol, so the instance a is never bound and never passed along to __call__; the wrapped function ends up being called without self, i.e. as an unbound method. So what is the solution? Descriptors to the rescue:
class Timeit(object):
    def __init__(self, func):
        self.func = func

    def __call__(self, *args, **kwargs):
        print('invoking Timer')

    def __get__(self, instance, owner):
        # bind the instance by hand, the way a bound method would
        return lambda *args, **kwargs: self.func(instance, *args, **kwargs)
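The snippet above only demonstrates the __get__ hook and drops the timing. A fuller sketch that keeps the measurement for both functions and methods could look like this (binding the instance with functools.partial is my assumption here, not part of the original answer):

import time
import functools

class Timeit(object):
    def __init__(self, func):
        self.func = func

    def __call__(self, *args, **kwargs):
        start_time = time.time()
        result = self.func(*args, **kwargs)
        print("elapsed time is %s" % (time.time() - start_time))
        return result

    def __get__(self, instance, owner):
        # bind the instance, so a.func() becomes a timed call with self filled in
        return functools.partial(self.__call__, instance)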
3. Python Call Mechanism
Description
We know that the __call__ method can be used to overload the parentheses operator on instances. OK, you think the problem is that simple? Naive!
class A(object):
    def __call__(self):
        print("invoking __call__ from A!")

if __name__ == "__main__":
    a = A()
    a()  # output: invoking __call__ from A!
Now it looks as if a() is simply equivalent to a.__call__(), which seems very easy, right? OK, now let's ask for trouble and write the following code:
a.__call__ = lambda: "invoking __call__ from lambda"
a.__call__()  # output: invoking __call__ from lambda
a()           # output: invoking __call__ from A!
Please explain why a() did not call the newly assigned a.__call__. (This question was raised by a senior from USTC.)
Answer
The reason is that, for new-style classes, the implicit invocation of built-in special methods is looked up on the type and is isolated from the instance's attribute dictionary. The official Python documentation explains this situation in detail:
For new-style classes, implicit invocations of special methods are only guaranteed to work correctly if defined on an object's type, not in the object's instance dictionary. That behaviour is the reason why the following code raises an exception (unlike the equivalent example with old-style classes):
The documentation also gives an example:
class C(object):
    pass

c = C()
c.__len__ = lambda: 5
len(c)
# Traceback (most recent call last):
#   File "<stdin>", line 1, in <module>
# TypeError: object of type 'C' has no len()
Back to our example: when we execute a.__call__ = lambda: "invoking __call__ from lambda", we really do add a new __call__ entry to a.__dict__. But when we execute a(), an implicit special-method invocation is involved, so the lookup does not start from a.__dict__; it goes straight to type(a).__dict__. That is why the situation described above arises.
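A small sketch to confirm the lookup rule: patching the instance has no effect on the implicit invocation, while patching the type does (the class A here is the same toy class as above).

class A(object):
    def __call__(self):
        print("invoking __call__ from A!")

a = A()
a.__call__ = lambda: "invoking __call__ from lambda"
a()  # output: invoking __call__ from A!  -- the instance attribute is ignored

# the implicit invocation consults the type, so patching A itself does change a()
A.__call__ = lambda self: "invoking __call__ from the patched A!"
print(a())  # output: invoking __call__ from the patched A!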
4. Descriptors
Description
I want to write an Exam class whose attribute math is an integer in [0, 100], raising an exception whenever an assigned value falls outside that range. I decided to implement this requirement with a descriptor.
class Grade(object):
    def __init__(self):
        self._score = 0

    def __get__(self, instance, owner):
        return self._score

    def __set__(self, instance, value):
        if 0 <= value <= 100:
            self._score = value
        else:
            raise ValueError('grade must be between 0 and 100!')

class Exam(object):
    math = Grade()

    def __init__(self, math):
        self.math = math

if __name__ == '__main__':
    niche = Exam(math=90)
    print(niche.math)
    # output: 90
    snake = Exam(math=75)
    print(snake.math)
    # output: 75
    snake.math = 120
    # output: ValueError: grade must be between 0 and 100!
It seems that everything works. But there is a big problem here; try to explain what it is.
To solve this problem, I rewrote the Grade descriptor as follows:
class Grad(object):
    def __init__(self):
        self._grade_pool = {}

    def __get__(self, instance, owner):
        return self._grade_pool.get(instance, None)

    def __set__(self, instance, value):
        if 0 <= value <= 100:
            _grade_pool = self.__dict__.setdefault('_grade_pool', {})
            _grade_pool[instance] = value
        else:
            raise ValueError("fuck")
But this leads to an even bigger problem. What is it, and how can we solve it?
Answer
1. The first question is actually quite simple. If you run print(niche.math) again, you will find that the output is 75 rather than 90. Why? This comes down to Python's attribute lookup order: when an attribute is accessed, Python first looks in the instance's __dict__; if it is not there, it searches the class dictionary, then the parent classes' dictionaries, until the attribute is found or nothing is left to search. Back to our problem: inside Exam, accessing self.math first looks in the instance's __dict__, does not find it, then goes up one level to the class Exam, finds the Grade descriptor there, and uses it. In other words, every operation on self.math is an operation on the single class attribute math, which is shared by all instances; that is how one instance's value pollutes another's. So how do we solve it? Many comrades will say: just write the value into the specific instance's dictionary inside __set__.
So does that work? Obviously not, and the reason lies in Python's descriptor mechanism. A descriptor is a class that implements the descriptor protocol, which consists of __get__, __set__, __delete__, plus the __set_name__ method added in Python 3.6. A class that implements __set__ (or __delete__) as well as __get__ is a data descriptor; a class that implements only __get__ is a non-data descriptor. What is the difference? I said earlier that attribute lookup goes from the instance's __dict__ to the class dictionary to the parent classes' dictionaries until nothing is left to search, but that description ignores descriptors. Taking descriptors into account, the correct rule is: if the attribute found in the class (or a parent class) dictionary is a data descriptor, its descriptor protocol is invoked unconditionally, no matter whether the attribute also exists in the instance dictionary; if it is a non-data descriptor, the value in the instance dictionary takes precedence, and the descriptor is triggered only when the attribute is absent from the instance dictionary. Back to the previous problem: even if we write the value into the instance dictionary inside __set__, the class dictionary still contains a data descriptor, so accessing the math attribute still goes through the descriptor protocol.
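A minimal sketch of this precedence rule (the class names here are made up for illustration):

class DataDesc(object):
    # implements __get__ and __set__, so it is a data descriptor
    def __get__(self, instance, owner):
        return "from the data descriptor"

    def __set__(self, instance, value):
        pass  # ignore writes, we only care about lookup order here

class NonDataDesc(object):
    # implements only __get__, so it is a non-data descriptor
    def __get__(self, instance, owner):
        return "from the non-data descriptor"

class C(object):
    d = DataDesc()
    n = NonDataDesc()

c = C()
c.__dict__['d'] = "instance value"
c.__dict__['n'] = "instance value"
print(c.d)  # from the data descriptor  -- the class-level data descriptor wins
print(c.n)  # instance value            -- the instance dict beats the non-data descriptor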
2. The improved version uses the uniqueness of dict keys to bind a value to each instance, but it introduces a memory leak. Why does the leak occur? Recall the characteristics of dict: any hashable object can be used as a key, and dict relies on the object's hash value to locate it (strictly speaking hash values are not unique, but collisions are rare and are resolved, so this works). The crucial point is that a dict holds strong references to its keys, which increases the reference count of each key object; an instance used as a key can therefore never be garbage-collected as long as the descriptor is alive, and that is the memory leak. So what should we do? There are two methods.
The first is:
class Grad(object):
    def __init__(self):
        import weakref
        self._grade_pool = weakref.WeakKeyDictionary()

    def __get__(self, instance, owner):
        return self._grade_pool.get(instance, None)

    def __set__(self, instance, value):
        if 0 <= value <= 100:
            _grade_pool = self.__dict__.setdefault('_grade_pool', {})
            _grade_pool[instance] = value
        else:
            raise ValueError("fuck")
WeakKeyDictionary from the weakref library holds only weak references to its keys: the key objects' reference counts are not increased, so no memory leak occurs. Similarly, if we want to avoid strong references to objects used as values, we can use WeakValueDictionary.
The second: Python 3.6 implements PEP 487, which adds a new __set_name__ hook to the descriptor protocol; with it the descriptor learns the attribute name it is bound to and can store each value directly on the corresponding instance:
class Grad(object):
    def __get__(self, instance, owner):
        return instance.__dict__[self.key]

    def __set__(self, instance, value):
        if 0 <= value <= 100:
            instance.__dict__[self.key] = value
        else:
            raise ValueError("fuck")

    def __set_name__(self, owner, name):
        self.key = name
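A usage sketch (assuming Python 3.6 or newer, since __set_name__ is only invoked there) showing that each instance now keeps its own score in its own __dict__:

class Exam(object):
    math = Grad()

    def __init__(self, math):
        self.math = math

if __name__ == '__main__':
    niche = Exam(math=90)
    snake = Exam(math=75)
    print(niche.math)  # output: 90 -- no longer polluted by other instances
    print(snake.math)  # output: 75
    # snake.math = 120 would still raise ValueError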
There is a lot involved in this topic. Some references: invoking-descriptors, Descriptor HowTo Guide, PEP 487, What's New in Python 3.6.
5. Python Inheritance Mechanism
Description
Try to find the output of the following code.
class Init(object):
    def __init__(self, value):
        self.val = value

class Add2(Init):
    def __init__(self, val):
        super(Add2, self).__init__(val)
        self.val += 2

class Mul5(Init):
    def __init__(self, val):
        super(Mul5, self).__init__(val)
        self.val *= 5

class Pro(Mul5, Add2):
    pass

class Incr(Pro):
    csup = super(Pro)

    def __init__(self, val):
        self.csup.__init__(val)
        self.val += 1

p = Incr(5)
print(p.val)
Answer
The output is 36. The MRO of Incr is Incr -> Pro -> Mul5 -> Add2 -> Init -> object. csup = super(Pro) is an unbound super object stored on the class; accessing it as self.csup binds it to the instance, so self.csup.__init__(val) starts the method lookup after Pro, i.e. at Mul5. The chain Mul5.__init__ -> Add2.__init__ -> Init.__init__ then unwinds as: Init sets val = 5, Add2 adds 2 (7), Mul5 multiplies by 5 (35), and finally Incr adds 1, giving 36. For details see New-style Classes, multiple-inheritance.
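A quick sketch to check the resolution order (assumes the classes above are already defined):

print([cls.__name__ for cls in Incr.__mro__])
# ['Incr', 'Pro', 'Mul5', 'Add2', 'Init', 'object']
#
# Init:  self.val = 5
# Add2:  5 + 2 = 7
# Mul5:  7 * 5 = 35
# Incr:  35 + 1 = 36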
6. Python Special Methods
Description
I wrote a class that implements the singleton pattern by overriding __new__:
class Singleton(object):
    _instance = None

    def __new__(cls, *args, **kwargs):
        if cls._instance:
            return cls._instance
        cls._instance = cv = object.__new__(cls, *args, **kwargs)
        return cv

sin1 = Singleton()
sin2 = Singleton()
print(sin1 is sin2)  # output: True
Now I have a bunch of classes that all need to be singletons, so I decided to write a metaclass to reuse the code:
class SingleMeta(type):
    def __init__(cls, name, bases, dict):
        cls._instance = None
        __new__o = cls.__new__

        def __new__(cls, *args, **kwargs):
            if cls._instance:
                return cls._instance
            cls._instance = cv = __new__o(cls, *args, **kwargs)
            return cv

        cls.__new__ = __new__

class A(object):
    __metaclass__ = SingleMeta

a1 = A()  # what's the fuck
Ouch, this makes me angry. Why doesn't this work? I patched __getattribute__ in the same way before; the following code captures every attribute access and prints the arguments:
class TraceAttribute(type):
    def __init__(cls, name, bases, dict):
        __getattribute__o = cls.__getattribute__

        def __getattribute__(self, *args, **kwargs):
            print('__getattribute__:', args, kwargs)
            return __getattribute__o(self, *args, **kwargs)

        cls.__getattribute__ = __getattribute__

class A(object):  # Python 3: class A(object, metaclass=TraceAttribute):
    __metaclass__ = TraceAttribute
    a = 1
    b = 2

a = A()
a.a  # output: __getattribute__: ('a',) {}
a.b  # output: __getattribute__: ('b',) {}
Explain why the __getattribute__ patch succeeded while the __new__ patch failed.
If I insist on using a metaclass to patch __new__ to implement the singleton pattern, how should I modify it?
Answer
In fact, this is the most annoying point: __new__ is implicitly a static method, so its replacement must also be installed as a static method. Assigning a plain function to cls.__new__ makes it behave like an ordinary method, and when type.__call__ later invokes cls.__new__(cls, ...) it blows up because the first argument is the class rather than an instance. __getattribute__, on the other hand, is an ordinary instance method, which is why that patch worked. The answer is as follows:
class SingleMeta(type):
    def __init__(cls, name, bases, dict):
        cls._instance = None
        __new__o = cls.__new__

        @staticmethod
        def __new__(cls, *args, **kwargs):
            if cls._instance:
                return cls._instance
            cls._instance = cv = __new__o(cls, *args, **kwargs)
            return cv

        cls.__new__ = __new__

class A(object):
    __metaclass__ = SingleMeta

print(A() is A())  # output: True
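For comparison, a sketch in Python 3 syntax (an assumption for illustration, not part of the original answer): a common alternative is to override __call__ on the metaclass, which sidesteps the staticmethod issue entirely.

class SingleMeta(type):
    def __init__(cls, name, bases, namespace):
        super().__init__(name, bases, namespace)
        cls._instance = None  # one slot per class created with this metaclass

    def __call__(cls, *args, **kwargs):
        # create the instance on the first call, then always return the cached one
        if cls._instance is None:
            cls._instance = super().__call__(*args, **kwargs)
        return cls._instance

class A(metaclass=SingleMeta):
    pass

print(A() is A())  # output: True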