While I like the market system, it seems to be much more Google Docs heavy than the previous 1-1 system. It is obvious that at least our current War Masters have trouble with it without the help of the admins. I wonder if we should either go back to the previous 1-1 system or find an entirely new system to avoid such situations in the future...

EDIT: if this is in the wrong topic, please move it to the correct one.
So are we any closer to secret sections, and will vaultbuilding be 2 weeks from when we get the sections? Presumably round 1 won't start until New Year's now?
Quote from: JonathanCrazyJ on December 17, 2018, 09:50:20 am
So are we any closer to secret sections, and will vaultbuilding be 2 weeks from when we get the sections? Presumably round 1 won't start until New Year's now?

The countdown started when we got the vaults through PM. Vaultbuilding ends in: No time remaining

Is this also the propaganda deadline? How can we find out whether we need to put more effort into propaganda?
Quote from: TheonlyrealBeef on December 17, 2018, 11:54:25 am
Quote from: JonathanCrazyJ on December 17, 2018, 09:50:20 am
So are we any closer to secret sections, and will vaultbuilding be 2 weeks from when we get the sections? Presumably round 1 won't start until New Year's now?
The countdown started when we got the vaults through PM. Vaultbuilding ends in: No time remaining
Is this also the propaganda deadline? How can we find out whether we need to put more effort into propaganda?

Propaganda doesn't even get mentioned in the rules anymore, not since War 7. That makes that part of the event a fully self-motivated activity.
I'd like to bring back this thread http://elementscommunity.org/forum/war-archive/petition-ban-sofree/
import asyncio
import aiohttp
from bs4 import BeautifulSoup
import re
import csv

# Matches an Elements deck code: 30-60 space-separated card codes followed by a mark code.
deckcode = re.compile(r'([4-8][0-9a-v]{2} ){30,60}8p[j-u]')
writer = csv.writer(open('result.csv', 'w', newline=''), quoting=csv.QUOTE_MINIMAL)
# Cap concurrent requests so we don't hammer the forum.
sema = asyncio.BoundedSemaphore(16)

async def getsite(url, parser='html.parser'):
    # Fetch a page and parse it; surrogateescape tolerates stray non-UTF-8 bytes.
    async with aiohttp.ClientSession() as session:
        async with sema, session.get(url) as resp:
            return BeautifulSoup((await resp.read()).decode('utf-8', errors='surrogateescape'), parser)

async def processMatch(td):
    # td is the topic-list cell for one match; its link leads to the match thread.
    a = td.div.span.span.a
    title = a.text
    soup = await getsite(a['href'])
    deckposts = soup.find_all('div', class_='post')
    decks = [None, None]
    # Pull the first deck code (if any) out of each of the first two posts.
    for post, i in zip(deckposts, range(2)):
        dcode = deckcode.search(post.get_text())
        decks[i] = dcode and dcode[0]
    writer.writerow((title.replace(',', '-'), a['href'], *decks))

async def processRound(url, page=1):
    pages = []
    while url:
        soup = await getsite(url)
        pages.append(asyncio.gather(*(
            processMatch(td) for td in soup.find_all('td', class_='subject windowbg2'))))
        # Follow the pagination bar to the next page, if there is one.
        page += 1
        nav = soup.find('a', class_='navPages', text=str(page))
        url = nav and nav['href']
    await asyncio.gather(*pages)

async def main():
    from sys import argv
    await processRound(argv[-1])

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
loop.close()
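For anyone wanting to try it: a quick usage sketch, assuming the script is saved as scrape.py and pointed at a round's board page (both the filename and the URL below are just placeholders, not anything from the actual event):

python3 scrape.py http://elementscommunity.org/forum/round-1/

It walks every page of that topic list, opens each match thread, and writes one CSV row per match to result.csv: the thread title, its link, and the first deck code found in each of the thread's first two posts (blank where no code matched).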