BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//InvisionCommunity Events 4.7.23//EN
METHOD:PUBLISH
CALSCALE:GREGORIAN
REFRESH-INTERVAL;VALUE=DURATION:PT15M
X-PUBLISHED-TTL:PT15M
X-WR-CALNAME:RMCommunityCalendar
NAME:RMCommunityCalendar
BEGIN:VTIMEZONE
TZID:Europe/London
TZURL:http://tzurl.org/zoneinfo/Europe/London
X-LIC-LOCATION:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:20250330T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:20251026T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
SUMMARY:Economic Corner 11 - What should you see after a DeepSeek? 0
	1/28/2025
DTSTAMP:20250129T005434Z
SEQUENCE:0
UID:167-7-c3fe8195a3dde498d013e477e2142422@aalbc.com
ORGANIZER;CN="richardmurray":troy@aalbc.com
DESCRIPTION:\n	DeepSeek and the quality of USA finance\n\n	MY THOUGHTS\n
	\n	600 billion dollars. Nvidia lost 17 percent of its value in a day. L
	ike many USA firms or industries outside military products\, they have 
	been weak from the 1900s to today.\n\n	DeepSeek said it cost 5 million 
	dollars to produce a product rivaling any comparable computer program i
	n storage/speed/calculation\, at roughly 1/20th of the cost. So this sh
	ows the valuation of the USA firms is incorrect\, which is my issue. Te
	sla was given such a high value. The USA's financial environment allows
	 for a bloating of firms\, like Nvidia\, like Tesla\, that\, to be blun
	t\, have each lost huge market share which they shouldn't have. The fac
	t that the best electric cars are made in China exposes Tesla's managem
	ent to me. The fact that Nvidia\, part of an industry that Biden gave b
	illions of investment to\, is playing catch-up exposes the chip industr
	y in the USA. The fact that OpenAI and Anthropic aren't open source\, a
	nd have been outed for their financial dysfunction\, demanding such inv
	estment while not making their code public\, exposes them.\n\n	Yes\, I 
	will use this economic corner to share DeepSeek information as best I c
	an. But my agenda is actually not about DeepSeek but the financial argu
	ment that the USA has a problem in its investment in technologies. Ther
	e are those who believe that the one world has already been created and
	 the USA is really the binder to all governments. In that mindset\, no 
	one is competing because the USA is really the interchange between all 
	governments. Human history proves fissures that are wanted eventually b
	ecome real\, even if it takes a long time. The lesson in Chinese indust
	ries to all non-white-European governments is to consider how they rese
	arch\, how they approach technological development. Is it about the Mas
	sachusetts Institute of Technology (M.I.T.)\, is it about Stanford\, is
	 it about nepotism? I remember being a college student\, and I remember
	 so often it was Blacks who graduated from an Oxford or an M.I.T. that 
	would be given opportunities but didn't have the imagination or passion
	 to do well with them. And the reason is simple\, as anyone non-white-E
	uropean knows: many people\, including many Asians who go to college in
	 the USA\, are more interested in the appearance of intellect than in b
	eing an ambitious creative. And for the record\, the Black people two g
	enerations earlier than mine\, in my bloodline\, earned multiple degree
	s or graduated from Ivy League schools\, so my position is not about no
	t going to an Ivy League school or gaining multiple degrees\, which I f
	ind so many Black people love to suggest in a very enslaved way when an
	other Black person speaks of imagination\, speaks of passion. Getting d
	egrees\, for too many Black Descendants of Enslaved people\, is a keepi
	ng-up-with-the-Joneses act\, a comparison to other Blacks in a vain dis
	play to whites\, not an important act of creativity or learning. The se
	cond article below may convince you of my point in this economic corner
	\, which has been uttered by many Black DOSers since the end of the war
	 between the states in the USA.\n\n	I quote the first article below\, a
	nd the source articles the quotes are from are present.\n\n\n\n	Lia
	ng told Chinese tech publication 36Kr that the decision was motivated by s
	cientific curiosity\, not a desire to make a profit. “I couldn’t find 
	a commercial reason to start DeepSeek even if you asked me\,” he said.
	 “Because it’s not commercially viable. Basic research has a very low 
	return on investment. When OpenAI’s early investors gave it money\, they
	 probably didn’t think about the return they would get. Rather\, they re
	ally wanted to do this business.”\n\n	...\n\n	While OpenAI o1 costs $15 
	per million incoming tokens and $60 per million outgoing tokens\, the Deep
	Seek Reasoner API based on the R1 model offers $0.55 per million incoming 
	tokens and $2.19 per million outgoing tokens.\n\n	...\n\n	To train its mod
	els\, the High-Flyer hedge fund purchased more than 10\,000 NVIDIA H100 GP
	Us before the US export restrictions were introduced in 2022. Billionaire 
	and Scale AI CEO Alexandr Wang recently told CNBC that he estimates that 
	DeepSeek now has about 50\,000 NVIDIA H100 chips that they cannot talk abo
	ut precisely because of US export controls. If this estimate is correct\, 
	then compared to the leading companies in the AI industry\, such as OpenAI
	\, Google\, and Anthropic\, this is very small. After all\, each of them h
	as more than 500\,000 GPUs.\n\n	...\n\n	 This also calls into question th
	e feasibility of the Stargate project\, an initiative under which OpenAI\,
	 Oracle\, and SoftBank promise to build next-generation AI data centers in
	 the United States\, allegedly willing to spend up to $500 billion.\n\n\n\
	n	\n\n	DeepSeek provides detailed technical reports explaining how the mod
	els work\, as well as code that anyone can look at and try to copy.\n\n	Co
	de on hugging face\n\n	https://huggingface.co/deepseek-ai/DeepSeek-R1\n\n\
	n\n	The code on GitHub\n\n	https://github.com/deepseek-ai/DeepSeek-R1\n\n\
	n\n	referral\n\n	https://fortune.com/2025/01/27/deepseek-just-flipped-the-
	ai-script-in-favor-of-open-source-and-the-irony-for-openai-and-anthropic-i
	s-brutal/\n\n\n\n	 \n\n\n\n	ARTICLES\n\n\n\n	Where DeepSeek came from and
	 who is behind the AI lab that shocked Silicon Valley\n\n	Taras Mishchenko
	\n\n	Editor-in-Chief of Mezha.Media. Taras has more than 15 years of exper
	ience in IT journalism\, writes about new technologies and gadgets.\n\n\n\
	n	28.01.2025 at 09:56\n\n	A new artificial intelligence model DeepSeek-R1 
	from the Chinese laboratory DeepSeek appeared as if from nowhere. For the 
	general public\, the first mentions of it began to appear in the media onl
	y last week\, and now it seems that everyone is talking about DeepSeek. Mo
	reover\, in just a week\, the DeepSeek app has overtaken the well-known Ch
	atGPT in the US App Store rankings. The model has also skyrocketed to the 
	top downloads on the Hugging Face developer platform\, as developers are rus
	hing to try it out and understand what this release can bring to their AI 
	projects. So\, logical questions arise: where did DeepSeek come from\, who
	 is behind this startup\, and why has it made so much noise? I will try to
	 answer them in this article.\n\n\n\n	Where DeepSeek came from\n\n	Given t
	he history of Chinese tech companies\, DeepSeek should have been a project
	 of giants like Baidu\, Alibaba\, or ByteDance. But this AI lab was launch
	ed in 2023 by High-Flyer\, a Chinese hedge fund founded in 2015 by entrepr
	eneur Liang Wenfeng. He made a fortune using AI and algorithms to identify
	 patterns that could affect stock prices. The hedge fund quickly gained po
	pularity in China\, and was able to raise more than 100 billion yuan (abou
	t $15 billion). Since 2021\, this figure has dropped to about $8 billion\,
	 but High-Flyer is still one of the most important hedge funds in the coun
	try.\n\n\n\n	As High-Flyer’s core business overlapped with the developme
	nt of AI models\, the hedge fund accumulated GPUs over the years and creat
	ed Fire-Flyer supercomputers to analyze financial data. In the wake of the
	 growing popularity of ChatGPT\, a chatbot from the American company OpenA
	I\, Liang\, who also holds a master’s degree in computer science\, decid
	ed in 2023 to invest his fund’s resources in a new company called DeepSe
	ek\, which was to create its own advanced models and develop general artif
	icial intelligence (AGI).\n\n\n\n	Liang told Chinese tech publication 36Kr
	 [ https://36kr.com/p/2272896094586500 ] that the decision was motivated
	 by scientific curiosity\, not a desire to make a profit. “I couldn’t 
	find a commercial reason to start DeepSeek even if you asked me\,” he sa
	id. “Because it’s not commercially viable. Basic research has a very l
	ow return on investment. When OpenAI’s early investors gave it money\, t
	hey probably didn’t think about the return they would get. Rather\, they
	 really wanted to do this business.”\n\n\n\n	According to Liang\, when h
	e assembled DeepSeek’s R&D team\, he also didn’t look for experie
	nced engineers to build a consumer-facing product. Instead\, he focused on
	 doctoral students from top universities in China\, including Peking Unive
	rsity\, Tsinghua University\, and Beihang University\, who were eager to p
	rove themselves. Many of them had published in top journals and won awards
	 at international academic conferences\, but had no industry experience\, 
	according to Chinese technology publication QBitAI. [ https://www.qbitai.
	com/2025/01/241000.html \; identity of workers at DeepSeek] \n\n\n\n	“
	Our main technical positions are mostly filled by people who graduated thi
	s year or within the last one or two years\,” Liang said in an interview
	 in 2023. He believes that students may be better suited for high-investme
	nt\, low-return research. “Most people\, when they are young\, can fully
	 commit to a mission without utilitarian considerations\,” Liang explain
	ed. His pitch to potential employees is that DeepSeek was created to “so
	lve the world’s toughest questions.”\n\n\n\n	Liang\, who is personally
	 involved in DeepSeek’s development\, uses the proceeds from his hedge f
	und to pay high salaries to top AI talent. Along with TikTok owner ByteDan
	ce\, DeepSeek is known in China for providing top compensation to AI engin
	eers\, and staff are based in offices in Hangzhou and Beijing.\n\n\n\n	Lia
	ng positions DeepSeek as a uniquely “local” company\, staffed by PhDs 
	from leading Chinese universities. In an interview with the domestic press
	 last year\, he said that his core team “didn’t have any people who ca
	me back from abroad. They are all local… We have to develop the best tal
	ent ourselves.” DeepSeek’s identity as a purely Chinese LLM company ha
	s earned it popularity at home\, as this approach is fully in line with Ch
	inese government policy.\n\n\n\n	This week\, Liang was the only representa
	tive of China’s AI industry chosen to participate in a highly publicized
	 meeting of entrepreneurs with the country’s second-in-command\, Li Qian
	g. Entrepreneurs were told to “focus on breakthroughs in key technologie
	s.”\n\n\n\n	Not much is known about how DeepSeek started building its ow
	n large language models (LLMs)\, but the lab quickly opened their source c
	ode\, and it is likely that\, like many Chinese AI developers\, it relied 
	on open source projects created by Meta\, such as the Llama model and the 
	PyTorch machine learning library. At the same time\, DeepSeek’s particul
	ar focus on research makes it a dangerous competitor for OpenAI\, Meta\, a
	nd Google\, as the AI lab is\, at least for now\, willing to share its dis
	coveries rather than protect them for commercial gain. DeepSeek has not ra
	ised funds from outside and has not yet taken significant steps to monetiz
	e its models. However\, it is not known for certain whether the Chinese go
	vernment is involved in financing the company.\n\n\n\n	What makes the Deep
	Seek-R1 AI model unique\n\n	In November\, DeepSeek first announced that it
	 had achieved performance that surpassed the leading-edge OpenAI o1 model\
	, but at the time it only released a limited R1-lite-preview model. With t
	he release of the full DeepSeek-R1 model last week and the accompanying wh
	ite paper\, the company introduced a surprising innovation: a deliberate d
	eparture from the traditional supervised fine-tuning (SFT) process that is
	 widely used for training large language models (LLMs).\n\n\n\n	SFT is a s
	tandard approach for AI development and involves training models on prepar
	ed datasets to teach them step-by-step reasoning\, often referred to as a 
	chain of thought (CoT). However\, DeepSeek challenged this assumption by s
	kipping SFT entirely and instead relying on reinforcement learning (RL) to
	 train DeepSeek-R1.\n\n\n\n	According to Jeffrey Emanuel\, a serial invest
	or and CEO of blockchain company Pastel Network\, DeepSeek managed to outp
	ace Anthropic in the application of the chain of thought (CoT)\, and now t
	hey are practically the only ones\, apart from OpenAI\, who have made this
	 technology work on a large scale.\n\n\n\n	At the same time\, unlike OpenA
	I\, which is incredibly secretive about how these models actually work at 
	a low level and does not provide the actual model weights to anyone other 
	than partners like Microsoft\, these DeepSeek models are completely open a
	nd permissively licensed. They have released extremely detailed technical 
	reports explaining how the models work\, as well as code that anyone can l
	ook at and try to copy.\n\n\n\n	With R1\, DeepSeek essentially cracked one
	 of the holy grails of AI: getting models to reason step by step without r
	elying on massive teacher datasets. Their DeepSeek-R1-Zero experiment show
	ed something remarkable: using pure reinforcement learning with carefully 
	designed reward functions\, the researchers were able to get the models to
	 develop complex reasoning capabilities completely autonomously. It wasn
	’t just problem solving. The model organically learned to generate long c
	hains of thought\, check its own work\, and allocate more computational ti
	me to more complex problems.\n\n\n\n	In this way\, the model learned to re
	vise its thinking on its own. What is particularly interesting is that dur
	ing training\, DeepSeek observed what they called an “aha moment\,” a 
	phase when the model spontaneously learned to revise its chain of thought 
	mid-process when faced with uncertainty. This sudden behavior was not expl
	icitly programmed\, but arose naturally from the interaction between the m
	odel and the reinforcement learning environment. The model literally stopp
	ed itself\, flagged potential problems in its reasoning\, and restarted wi
	th a different approach\, all without being explicitly trained to do so.\n
	\n\n\n	DeepSeek also solved one of the main problems in reasoning models: 
	language consistency. Previous attempts at chain-of-thought reasoning ofte
	n resulted in models mixing languages or producing incoherent output. Deep
	Seek solved this problem by smartly rewarding language consistency during 
	RL training\, sacrificing a slight performance hit for a much more readabl
	e and consistent output.\n\n\n\n	As a result\, DeepSeek-R1 achieves high a
	ccuracy and efficiency. At AIME 2024\, one of the toughest math competitio
	ns for high school students\, R1 achieved 79.8% accuracy\, which is in lin
	e with OpenAI’s o1 model. At MATH-500\, it reached 97.3%\, and at the Co
	deforces programming competition\, it reached the 96.3rd percentile. But per
	haps most impressively\, DeepSeek was able to distill these capabilities d
	own to much smaller models: their 14 billion-parameter version outperforms
	 many models several times its size\, showing that reasoning power depends
	 not only on the number of parameters but also on how you train the model 
	to process information.\n\n\n\n	However\, the uniqueness of DeepSeek-R1 li
	es not only in the new approach to model training\, but also in the fact t
	hat it is the first time a Chinese AI model has gained such great populari
	ty in the West. Users\, of course\, immediately went to ask it questions a
	bout Tiananmen Square and Taiwan that were sensitive to the Chinese govern
	ment\, and quickly realized that DeepSeek was censored. Indeed\, it would 
	be futile to expect a Chinese AI lab to not comply with Chinese law or pol
	icy.\n\n\n\n	However\, many developers consider this censorship an edge
	 case rarely encountered in real-world use that can be mitigated by fine-tun
	ing. Therefore\, it is unlikely that the issue of ethical use of DeepSeek-
	R1 will stop many developers and users who want to get access to the lates
	t AI developments\, essentially for free.\n\n\n\n	Of course\, for many\, 
	the security of the data remains a question mark\, as DeepSeek-R1 probably
	 stores it on Chinese servers. But as a precautionary measure\, you can tr
	y the model on Hugging Face in sandbox mode [ https://huggingface.co/deep
	seek-ai/DeepSeek-R1 ] \, or even run it locally on your PC if you have th
	e necessary hardware. In such cases\, the model will not be fully function
	al\, but it will remove the issue of data transfer to Chinese servers.\n\n
	\n\n	How much did it cost to develop DeepSeek-R1?\n\n	To train its models\
	, the High-Flyer hedge fund purchased more than 10\,000 NVIDIA H100 GPUs b
	efore the US export restrictions were introduced in 2022. Billionaire and 
	Scale AI CEO Alexandr Wang recently told CNBC that he estimates that Deep
	Seek now has about 50\,000 NVIDIA H100 chips that they cannot talk about p
	recisely because of US export controls. If this estimate is correct\, then
	 compared to the leading companies in the AI industry\, such as OpenAI\, G
	oogle\, and Anthropic\, this is very small. After all\, each of them has m
	ore than 500\,000 GPUs.\n\n\n\n	According to NVIDIA engineer Jim Fan\, Dee
	pSeek trained its base model\, called V3\, with a budget of $5.58 million 
	over two months. However\, it is difficult to estimate the total cost of t
	raining DeepSeek-R1. The use of 60\,000 NVIDIA GPUs could potentially cost
	 hundreds of millions of dollars\, so the exact figures remain speculative
	.\n\n\n\n	Why DeepSeek-R1 shocked Silicon Valley\n\n	DeepSeek largely disr
	upts the business model of OpenAI and other Western companies working on t
	heir own closed AI models. After all\, DeepSeek-R1 not only performs bette
	r than the best open-source alternative\, Llama 3 by Meta. It also trans
	parently shows the entire chain of thought in its answers. This is a blow 
	to the reputation of OpenAI\, which has hitherto hidden the thought chains
	 of its models\, citing trade secrets and the fact that it does not want t
	o embarrass users when the model is wrong.\n\n\n\n	In addition\, DeepSee
	k’s success emphasizes that cost-effective and efficient AI development 
	methods are realistic. We have already determined that in the case of a Ch
	inese company\, it is difficult to calculate the cost of development\, and
	 there may always be “surprises” in the form of multi-billion dollar g
	overnment funding. But at the moment\, DeepSeek-R1\, with a similar level 
	of accuracy to OpenAI o1\, is much cheaper for developers. While OpenAI o1
	 costs $15 per million incoming tokens and $60 per million outgoing tokens
	\, the DeepSeek Reasoner API based on the R1 model offers $0.55 per millio
	n incoming tokens and $2.19 per million outgoing tokens.\n\n\n\n	However\,
	 while DeepSeek’s innovations are groundbreaking\, they have by no means
	 given the Chinese AI lab market leadership. As DeepSeek has published its
	 research\, other AI model development companies will learn from it and ad
	apt. Meta and Mistral\, a French open-source model development company\, m
	ay be a bit behind\, but it will probably only take them a few months to c
	atch up with DeepSeek. As Yann LeCun\, a leading AI researcher at Meta\, sa
	id: “The idea is that everyone benefits from the ideas of others. No one
	 is “ahead” of anyone and no country is “losing” to another. No on
	e has a monopoly on good ideas. Everyone learns from everyone.”\n\n\n\n	
	DeepSeek’s offerings are likely to continue to lower the cost of using A
	I models\, which will benefit not only ordinary users but also startups an
	d other businesses interested in AI. But if developing a DeepSeek-R1 model
	 with fewer resources does turn out to be a reality\, it could be a proble
	m for AI companies that have invested heavily in their own infrastructure.
	 In particular\, years of operating and capital expenditures by OpenAI and
	 others could be wasted.\n\n\n\n	The market doesn’t yet know the final a
	nswer to whether AI development will indeed require less computing power i
	n the future\, but it is already reacting nervously with a drop in shares o
	f NVIDIA and other suppliers of AI data center components. This also calls
	 into question the feasibility of the Stargate project\, an initiative und
	er which OpenAI\, Oracle\, and SoftBank promise to build next-generation A
	I data centers in the United States\, allegedly willing to spend up to $50
	0 billion.\n\n\n\n	But on the other hand\, while American companies will s
	till have excess capacity for the development of artificial intelligence\,
	 China’s DeepSeek\, with the US export restrictions on chips still in pl
	ace\, may face a severe shortage. If we assume that resource constraints h
	ave indeed pushed it to innovate and allowed it to create a competitive pr
	oduct\, the lack of computing power will simply prevent it from scaling\, 
	while competitors will catch up. Therefore\, despite all the innovation of
	 DeepSeek\, it is still too early to say that Chinese companies will be ab
	le to compete with Western AI tech giants\, even if we put aside the issue
	s of censorship and data security.\n\n\n\n	URL\n\n	https://mezha.media/en/
	articles/where-deepseek-came-from-and-who-is-behind-the-ai-lab-that-shocke
	d-silicon-valley\n\n\n\n	 \n\n\n\n	Question and Answer excerpts
	 from 疯狂的幻方：一家隐形AI巨头的大模型之路 (“Crazy High
	-Flyer: the large-model road of a hidden AI giant”)\n\n	...\n\n
		36Kr: What deductions and assumptions have you made about the business mod
	el?\n\n\n\n	Liang Wenfeng: What we want now is that we can share most of o
	ur training results publicly\, so that it can be combined with commerciali
	zation. We hope that more people\, even a small app\, can use large models
	 at a low cost\, instead of technology only in the hands of some people an
	d companies\, forming a monopoly.\n\n	...\n\n	36Kr: In any case\, it's a b
	it crazy for a commercial company to do a kind of research exploration wit
	h unlimited investment.\n\n\n\n	Liang Wenfeng: If you have to find a comme
	rcial reason\, you may not find one\, because it can't be done.\n\n\n\n	Fro
	m a business point of view\, basic research has a very low return on inves
	tment. When OpenAI's early investors invested money\, they must not have t
	hought about how much return they would get back\, but really wanted to do it
	.\n\n\n\n	What we are more certain now is that since we want to do this an
	d have the ability\, we are one of the most suitable candidates at this po
	int in time.\n\n	...\n\n	36Kr: How would you see the competitive landscape
	 of large models?\n\n\n\n	Liang Wenfeng: Large companies definitely have
	 advantages\, but if their models can't be applied quickly\, they may not
	 be able to keep committing to them\, because they need to see results.\n\n\n\
	n	The top startups also have solid technology\, but like the old wave of A
	I startups\, they have to face commercialization problems.\n\n	...\n\n	36K
	r: Talents for large-scale model entrepreneurship are also scarce\, and so
	me investors say that many suitable talents may only be in the AI labs of 
	giants such as OpenAI and Facebook AI Research. Do you go overseas to poach
	 this kind of talent?\n\n\n\n	Liang Wenfeng: If you are pursuing short-ter
	m goals\, it is right to find someone with existing experience. But if you
	 look at the long term\, experience is not so important\, but basic abilit
	y\, creativity\, passion\, etc. are more important. From this point of vie
	w\, there are many suitable candidates in China.\n\n\n\n	36Kr: Why isn't e
	xperience so important?\n\n\n\n	Liang Wenfeng: It doesn't take someone wh
	o has already done this to do it. High-Flyer's principle of recruiti
	ng people is to look at ability\, not experience. Our core technical posit
	ions are basically mainly fresh graduates and those who have graduated for
	 one or two years.\n\n\n\n	36Kr: Do you think experience is an obstacle wh
	en it comes to innovating business?\n\n\n\n	Liang Wenfeng: When you do som
	ething\, experienced people will tell you without thinking that you should
	 do it\, but people without experience will repeatedly explore and think s
	eriously about what should be done\, and then find a solution that is in l
	ine with the current actual situation.\n\n\n\n	36Kr: High-Flyer has entere
	d the industry from a layman with no financial genes at all\, and has beco
	me the head in a few years\, is this recruitment rule one of the secrets?\
	n\n\n\n	Liang Wenfeng: Our core team\, even myself\, didn't have quantitat
	ive experience at the beginning\, which is very special. It can't be said 
	to be the secret of success\, but it's one of the cultures of High-Flyer. 
	We don't deliberately shy away from experienced people\, but it's more abo
	ut ability.\n\n\n\n	Take the sales position as an example. Our two main 
	salespeople were both amateurs in this industry. One was originally enga
	ged in the foreign trade of German machinery\, and the other originally 
	wrote back-office code at a brokerage. When they entered the industry\, 
	they had no experience\, no resources\, no accumulation.\n\n\
	n\n	And now we may be the only big private-equity firm that can focus on
	 direct sales. Direct selling means there is no need to split fees with 
	middlemen\, so the profit margin is higher at the same scale and perform
	ance. Many companies have tried to imitate us\, but they have not succee
	ded.\n\n\n\n	36Kr: Why are so many firms trying to imitate you\, but wit
	hout success?\n\n\n\n	Liang Wenfeng: Because that's not
	 enough for innovation to happen. It needs to match the culture and manage
	ment of the company.\n\n\n\n	In fact\, they couldn't do anything in the fi
	rst year\, and only in the second year did they start to make some progres
	s. But our assessment criteria are different from those of ordinary compan
	ies. We don't have KPIs and we don't have so-called tasks.\n\n\n\n	36Kr: W
	hat are your assessment criteria?\n\n\n\n	Liang Wenfeng: We are not like
	 ordinary companies that value the number of orders placed by customers.
	 Our salespeople's sales and commissions are not good at the beginning\,
	 but we encourage them to develop their own circles\, meet more people\,
	 and build greater influence.\n\n\n\n	Because we believe that an honest 
	salesperson
	 who can be trusted by customers may not be able to get customers to place
	 orders in a short period of time\, but he can make you feel that he is a 
	reliable person.\n\n	URL\n\n	https://36kr.com/p/2272896094586500\n\n\n\n	
	 \n\n\n\n	Prior entry\n\n\n\n	https://aalbc.com/tc/topic/11445-economicco
	rner010/\n\n\n\n	POST URL\n\n	https://aalbc.com/tc/topic/11447-economiccor
	ner011/\n\n\n\n	PRIOR EDITION\n\n\n\n	https://aalbc.com/tc/events/event/16
	6-economic-corner-10-online-divestiture-01282025/\n\n\n\n	NEXT EDITION\n
	\n\n\n	https://aalbc.com/tc/events/event/193-economic-corner-12-02122025/\
	n\n\n\n	 \n\n\n\n	\n\n
DTSTART;VALUE=DATE:20250128
RRULE:FREQ=YEARLY;INTERVAL=1
END:VEVENT
END:VCALENDAR
